Flutter - more efficient pan and zoom for CustomPaint - performance

I'm rendering a collection of grids of tiles, where each tile is pulled from an image. Everything is drawn inside my own CustomPainter implementation (because the grids can get pretty large). To support pan and zoom functionality, I opted to perform the offsetting and scaling as part of canvas painting.
Here is a portion of my custom painting implementation.
@override
void paint(Canvas canvas, Size size) {
  // With the new canvas size, we may have new constraints on min/max offset/scale.
  zoom.adjust(
    containerSize: size,
    contentSize: Size(
      (cellWidth * columnCount).toDouble(),
      (cellHeight * rowCount).toDouble(),
    ),
  );

  canvas.save();
  canvas.translate(zoom.offset.dx, zoom.offset.dy);
  canvas.scale(zoom.scale);

  // Now, draw the background image and grids.
  // ...

  canvas.restore();
}
While this is functional, performance starts to break down once enough cells are rendered (for example, a 100x100 grid causes some lag on each GestureDetector callback that updates the zoom values). And, because the offsetting and scaling is done inside the CustomPaint, I basically can't return false from bool shouldRepaint(MyPainter old), because the painter needs to repaint to render its new offset and scale.
So, my question is: What is a more performant way of approaching this problem?
I've tried one other approach:
var separateRenderTree = RepaintBoundary(
  child: OverflowBox(
    child: CustomPaint(
      painter: MyPainter(),
    ),
  ),
);

return Transform(
  transform: Matrix4.translationValues(_zoom.offset.dx, _zoom.offset.dy, 0)
    ..scale(_zoom.scale),
  child: separateRenderTree,
);
This works too, but it can also get laggy when scaling (translating is buttery smooth).
So, again, what is the right approach to this problem?
Thank you.

Here's where I ended up.
I size my custom painter to be as large as it needs to be, and then I position it inside a Transform widget (top-left aligned, with an offset of zero).
On top of this widget I overlay an invisible widget that manages touch inputs. Using a GestureDetector, it will respond to events and notify the Transform widget to update.
With the pan/zoom officially moved out of the painter, I then implemented shouldRepaint to be much stricter.
This has allowed me to render very, very large grids at good-enough speeds.
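A minimal sketch of that structure, as a build method in a StatefulWidget's State. The _zoom and MyPainter names are carried over from the question; contentWidth, contentHeight, and the _zoom.update helper are assumptions for illustration:

@override
Widget build(BuildContext context) {
  return Stack(
    children: [
      // The painted content, laid out at its full size and moved as a whole,
      // so the painter itself never needs to know about pan/zoom.
      Transform(
        alignment: Alignment.topLeft,
        transform: Matrix4.translationValues(_zoom.offset.dx, _zoom.offset.dy, 0)
          ..scale(_zoom.scale),
        child: RepaintBoundary(
          child: CustomPaint(
            size: Size(contentWidth, contentHeight), // full grid extent
            painter: MyPainter(), // shouldRepaint can now return false
          ),
        ),
      ),
      // Invisible layer that owns the gestures; updating _zoom only rebuilds
      // the Transform, not the painted layer behind the RepaintBoundary.
      Positioned.fill(
        child: GestureDetector(
          onScaleUpdate: (details) => setState(() => _zoom.update(details)),
        ),
      ),
    ],
  );
}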

Related

p5js only drawing what's needed

I'm looking for a way to limit what gets done in the draw loop. I have a system where, when I click, it adds a rect. The rect then starts spawning circles that move. Since the rect does not change location, it isn't ideal to redraw it every frame. Is there a way to put the rects on a different layer of sorts, or is there another mechanism I can use to limit the rect-drawing without impeding the circle-drawing? I've tried using createGraphics to make a background with the rects, but I can't make the 'foreground' where the circles reside transparent.
Curious about this, I tried it myself. My idea was simply to grab the canvas and interact with it directly, regardless of p5.js. My result was that the draw (in this case ctx.fillRect) did not render on screen, although the fillStyle was changed. (In hindsight, two things work against this: a canvas element has width/height rather than innerWidth/innerHeight, and p5's draw() repaints the background every frame over anything drawn from outside.) Canvas is surprisingly efficient, as is WebGL, and can usually handle the load unless you are rendering hundreds (mobile) to thousands (laptop/desktop) of objects. I would have liked a better outcome, but I think it was worthwhile posting what I tried and what happened nonetheless.
// P5 setup
function setup() {
  createCanvas(1500, 750);
  background('rgba(0, 0, 0, 0.3)');
  stroke(255);
  fill(255);
  doNonP5Drawing();
}

// Render
function draw() {
  background(0);
  frame(); // assumed to draw the moving circles
}

function doNonP5Drawing() {
  let canvas = document.querySelector('canvas'),
      ctx = canvas.getContext('2d');
  ctx.fillStyle = "red";
  // Note: a canvas has width/height, not innerWidth/innerHeight.
  ctx.fillRect(canvas.width / 2 - 100, canvas.height / 2 - 100, 200, 200);
}
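For the layering the question asks about, here is a minimal, untested sketch of the createGraphics approach (the sizes and the 40x40 rect are assumptions). p5's offscreen buffers are transparent by default, so the static rects can accumulate in one buffer while the circles are drawn on the main canvas each frame, and no transparent 'foreground' is needed:

let rectLayer;

function setup() {
  createCanvas(1500, 750);
  rectLayer = createGraphics(1500, 750); // transparent offscreen buffer
}

function mousePressed() {
  // The rect is drawn once, into the buffer, not on every frame.
  rectLayer.noStroke();
  rectLayer.fill(255);
  rectLayer.rect(mouseX, mouseY, 40, 40);
}

function draw() {
  background(0);
  image(rectLayer, 0, 0); // one cheap blit of all accumulated rects
  // ...draw the moving circles directly on the main canvas here...
}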

Suggestion on how to create a window over a wall

I am working on a three.js application where I have to create a building structure (all on the ground floor); the height, width, and length will be specified by the user. The user can change the wall and roof color (applied using textures, as I have images for each color with some texture). They can also add an accessory to a selected wall (like a window or a door), which can then be dragged and dropped on that same selected wall. After deciding where they want to put the window (for example), they will click a button to confirm the position. Now I have to create a window in the wall, so that I can see the inside of the room. Please share your views on the following approaches:
Once the user confirms the position of the door -
a.) I can add the mesh of the window to the main building mesh: mainMesh.add(windowMesh);. But the problem is that even if I set a transparent material on the window, the wall material still shows through.
b.) I can subtract the window mesh from the main building mesh (using CSG, ThreeCSG): buildingmeshcsg.subtract(windowmeshcsg). This creates a hole in the building mesh, and then I put the window mesh over that hole. The problem is that after any CSG operation the faces of the original geometry get all mixed up, so the colors and UVs of the faces are lost.
c.) I can create the wall in small sections, e.g. from one wall corner to the window corners, then from the other window corner to the other wall corner. But this messes up the texture I have applied to the walls, because I created UVs for the front and back walls, as the texture was not applying correctly.
Please suggest your views.
I have to make something like this: https://forum.unity.com/threads/make-a-seethrough-window-without-making-hole-in-the-wall.286393/
three.js (any version)
This sounds like a good candidate for the stencil buffer. Draw the window and write to the stencil, then draw the wall where the stencil was not written (0), then draw the window where it was written (1).
These are the methods you are interested in:
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/stencilOp
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/stencilFunc
You need to do this first:
const gl = renderer.getContext() //obtain the actual webgl context, because three has no stencil functionality
And then you need to manage your render logic, this is where Object3D.onBeforeRender callback should help.
So let's assume that you have something like this:
let myWindow, myWall // two meshes that you loaded or instantiated

// For a proof of concept you can do something like this.
// Caveat: an Object3D can only have one parent, so adding myWindow to
// windowScene removes it from maskScene; re-add it (or add a clone)
// between the two passes.
const maskScene = new THREE.Scene()
const wallScene = new THREE.Scene()
const windowScene = new THREE.Scene()
maskScene.add(myWindow)
wallScene.add(myWall)
windowScene.add(myWindow)
function render() {
  // Assumes myRenderer.autoClear has been set to false; otherwise each
  // renderer.render() call below would clear the buffers between passes.
  gl.enable(gl.STENCIL_TEST) // enable stencil testing
  gl.clearStencil(0) // set the stencil clear value
  gl.clear(gl.STENCIL_BUFFER_BIT) // clear the stencil buffer with that value

  gl.stencilFunc(gl.ALWAYS, 1, 1) // always pass the stencil test, with ref 1
  gl.stencilOp(gl.REPLACE, gl.REPLACE, gl.REPLACE) // replace the stencil value with the ref
  gl.colorMask(false, false, false, false) // do not write any color
  gl.depthMask(false) // do not write to depth
  myRenderer.render(maskScene, myCamera) // only the stencil is drawn

  // Now you have a region in the frame buffer with stencil value 1, and the
  // rest 0; you can draw the wall at 0 and the window back at 1.
  gl.colorMask(true, true, true, true) // enable color writes again
  gl.depthMask(true)
  gl.stencilFunc(gl.EQUAL, 0, 1) // pass where the stencil equals 0
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP) // keep the stencil value in all three cases
  myRenderer.render(wallScene, myCamera) // draw the wall around the window

  gl.stencilFunc(gl.EQUAL, 1, 1) // now pass where the stencil equals 1
  myRenderer.render(windowScene, myCamera) // draw the window
}
This is the most basic attempt and it is not tested. Since it works directly with the WebGL API, it should work with any version of three.js. The last two arguments of stencilOp let you manage what happens when the depth test fails or passes.
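As a side note, newer three.js releases (roughly r106 and later) expose stencil state directly on Material, so the same masking idea can be sketched without raw WebGL calls. Untested, and it relies on the renderer's default stencil buffer:

// Mask pass: the window writes 1 into the stencil, nothing into color/depth.
const maskMaterial = new THREE.MeshBasicMaterial({
  colorWrite: false,
  depthWrite: false,
  stencilWrite: true,
  stencilRef: 1,
  stencilFunc: THREE.AlwaysStencilFunc,
  stencilZPass: THREE.ReplaceStencilOp,
});

// Wall pass: only draw where the stencil is still 0 (i.e. around the window).
const wallMaterial = new THREE.MeshStandardMaterial({
  stencilWrite: true,
  stencilRef: 0,
  stencilFunc: THREE.EqualStencilFunc,
});

// Draw order matters: give the mask mesh a lower renderOrder than the wall,
// e.g. maskMesh.renderOrder = 0; wallMesh.renderOrder = 1;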

drawing lots of identical features — what's the best approach?

I need to draw a large number of rectangles (potentially up to millions), located all over the world. I was wondering what the optimal approach is to achieve the best possible performance. My requirements are:
all items are rectangles (not squares), and identical in size and color
their rotation is different per individual item though
they have different fixed locations – they do not move
the rectangles need to be pickable
they might need to be scaled according to current zoom level (to make them look like real objects on the ground)
use the webgl renderer
What I have tried so far:
const features = R.map(
  (i) => {
    // [...] calculate `coords` and `rotation`
    const point = new ol.geom.Point(coords);
    const feature = new ol.Feature(point);
    feature.__angle = rotation;
    return feature;
  },
  R.range(0, count /* lots of them! */)
);

const sheetStyle = new ol.style.Style({
  image: new ol.style.Icon({
    size: [5, 8], // shape of rectangle
    src: 'color.png' // 1×1px image
  })
});

const vectorLayer = new ol.layer.Vector({
  source: new ol.source.Vector({ features }),
  preload: Infinity,
  updateWhileAnimating: true,
  updateWhileInteracting: true,
  style: (feature, resolution) => {
    const image = sheetStyle.getImage();
    // TODO: is there a way to only have to do this once?
    image.setRotation(feature.__angle);
    // scale according to zoom level
    image.setScale(0.3 / resolution);
    return sheetStyle;
  },
});
I was wondering if ol3 does any sort of optimization under the hood:
does it merge the geometries into one?
does it only display items that are in the visible part of the map?
since all items are identical, is there a way to use instancing?
Related: for better performance, I am only creating a single style object that I reuse for all items. However, I need to set a rotation on each of them, which is why I am using a style function. Once it is set, the rotation won't change anymore, though. Is there a way around having to call the style function every frame? (One idea is sketched at the end of this question.)
I am also considering using a heatmap layer for lower zoom levels and then switching to the vector layer as the user zooms in.
It would be great if someone could give me hints for overall performance improvements.
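For the rotation point above, a hedged, untested sketch: since the rotation never changes, each feature could get its own style with the rotation baked in once, leaving only the zoom-dependent scale in a feature-level style function (ol3's ol.Feature#setStyle accepts a function of resolution). The trade-off is one icon/style pair per feature, which may be too much memory at millions of features:

features.forEach(function (feature) {
  const icon = new ol.style.Icon({ size: [5, 8], src: 'color.png' });
  icon.setRotation(feature.__angle); // baked in once, never touched again
  const style = new ol.style.Style({ image: icon });
  feature.setStyle(function (resolution) {
    icon.setScale(0.3 / resolution); // only the zoom-dependent part remains
    return style;
  });
});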

Need to draw arrows and circles on canvas without clearing the whole canvas

I'm trying to draw arrows and circles on a canvas. Currently the whole canvas is cleared on mousemove and mousedown (or whenever the draw function is called), so I am not able to draw multiple arrows and circles. Is there any other way to accomplish this task?
Here's a fiddle: http://jsfiddle.net/V7MRL/
Stack two canvases on top of each other, draw the temporary arrows/circles on the one on top, and do the final draw on the canvas below. This way you can clear the top canvas with no issue, and your draws 'accumulate' in the lower canvas.
http://jsfiddle.net/V7MRL/5/ (updated)
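The stacking itself is just two absolutely positioned canvases sharing a parent; a minimal sketch of the setup (the ids, sizes, and container element are assumptions, not taken from the fiddle):

var container = document.getElementById('container'); // has position: relative
['lower', 'upper'].forEach(function (id, i) {
  var c = document.createElement('canvas');
  c.id = id;
  c.width = 600;
  c.height = 400;
  c.style.position = 'absolute';
  c.style.left = '0';
  c.style.top = '0';
  c.style.zIndex = i; // the upper canvas sits on top and receives mouse events
  container.appendChild(c);
});
var context = document.getElementById('lower').getContext('2d'); // final draws
var tempContext = document.getElementById('upper').getContext('2d'); // temporary draws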
I changed your draw function so it takes a 'final' flag that states whether the draw is final. For a final draw, the lower canvas is used; for a temporary draw, the upper canvas is cleared and then used.
function draw(final) {
  var ctx = final ? context : tempContext;
  if (!final) ctx.clearRect(0, 0, canvas.width, canvas.height);
  // ... the rest of the drawing code is unchanged ...
}
Edit: for the issue #markE mentioned, I just handled the mouseout event to cancel the draw if one is ongoing:
function mouseOut(eve) {
  if (mouseIsDown) {
    mouseIsDown = 0;
    cancelTempDraw();
  }
}
with:
function cancelTempDraw() {
  tempContext.clearRect(0, 0, canvas.width, canvas.height);
}
Note that your undo logic is not working as of now. I changed it a bit so that in draw, if the draw is final, I save the final canvas prior to drawing the latest figure, to quickly get a one-step undo. I just created a third temporary canvas to/from which to copy. Note also that you cannot store one canvas per stroke, or memory use will soon explode. Store the figures and their coordinates in an array if you want full undo (a sketch follows below), or just allow one-step undo for now. I updated the jsfiddle link.
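A hedged sketch of that array-based undo idea, reusing the lower-canvas context from above (drawFigure and the figure object's shape are hypothetical):

var figures = []; // every committed figure, in draw order

function commitFigure(fig) { // fig: e.g. { type: 'arrow', from: ..., to: ... }
  figures.push(fig);
  redrawAll();
}

function undo() {
  figures.pop(); // drop the most recent figure
  redrawAll();
}

function redrawAll() {
  context.clearRect(0, 0, canvas.width, canvas.height);
  figures.forEach(function (fig) {
    drawFigure(context, fig); // hypothetical: draws one arrow or circle
  });
}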

Drawing UI elements directly to the WebGL area with Three.js

In Three.js, is it possible to draw directly to the WebGL area (for a heads-up display or UI elements, for example) the way you could with a regular HTML5 canvas element?
If so, how can you get the context and what drawing commands are available?
If not, is there another way to accomplish this, through other Three.js or WebGL-specific drawing commands that would cooperate with Three.js?
My backup plan is to use HTML divs as overlays, but I think there should be a better solution.
Thanks!
You can't draw directly to the WebGL canvas the same way you do with a regular canvas. However, there are other methods, e.g.
Draw to a hidden 2D canvas as usual and transfer that to WebGL by using it as a texture to a quad
Draw images using texture mapped quads (e.g. frames of your health box)
Draw paths (and shapes) by putting their vertices into a VBO and drawing it with the appropriate polygon type
Draw text by using a bitmap font (basically textured quads) or real geometry (three.js has examples and helpers for this)
Using these usually means setting up an orthographic camera.
However, all this is quite a bit of work and e.g. drawing text with real geometry can be expensive. If you can make do with HTML divs with CSS styling, you should use them as it's very quick to set up. Also, drawing over the WebGL canvas, perhaps using transparency, should be a strong hint to the browser to GPU accelerate its div drawing if it doesn't already accelerate everything.
Also remember that you can achieve quite a lot with CSS3, e.g. rounded corners, alpha transparency, and even 3D perspective transformations, as demonstrated by Anton's link in the question's comments.
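If the div-overlay fallback is enough, it really is only a few lines; a minimal sketch (the styling and text are assumptions):

var hud = document.createElement('div');
// Sit on top of the WebGL canvas; pointer-events: none lets clicks through.
hud.style.cssText = 'position:absolute; top:10px; left:10px; color:#fff; pointer-events:none;';
hud.textContent = 'Health: 100';
document.body.appendChild(hud); // assumes the WebGL canvas fills the page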
I had exactly the same issue. I was trying to create a HUD (head-up display) without the DOM, and I ended up with this solution:
I created a separate scene with an orthographic camera.
I created a canvas element and used 2D drawing primitives to render my graphics.
Then I created a plane fitting the whole screen and used the 2D canvas element as a texture.
I rendered that secondary scene on top of the original scene.
Here's what the HUD code looks like:
// We will use a 2D canvas element to render our HUD.
var hudCanvas = document.createElement('canvas');

// Again, set dimensions to fit the screen.
hudCanvas.width = width;
hudCanvas.height = height;

// Get the 2D context and draw something supercool.
var hudBitmap = hudCanvas.getContext('2d');
hudBitmap.font = "Normal 40px Arial";
hudBitmap.textAlign = 'center';
hudBitmap.fillStyle = "rgba(245,245,245,0.75)";
hudBitmap.fillText('Initializing...', width / 2, height / 2);

// Create the camera and set the viewport to match the screen dimensions.
var cameraHUD = new THREE.OrthographicCamera(-width/2, width/2, height/2, -height/2, 0, 30);

// Create also a custom scene for the HUD.
sceneHUD = new THREE.Scene();

// Create a texture from the rendered graphics.
var hudTexture = new THREE.Texture(hudCanvas);
hudTexture.needsUpdate = true;

// Create the HUD material.
var material = new THREE.MeshBasicMaterial({ map: hudTexture });
material.transparent = true;

// Create a plane to render the HUD on. This plane fills the whole screen.
var planeGeometry = new THREE.PlaneGeometry(width, height);
var plane = new THREE.Mesh(planeGeometry, material);
sceneHUD.add(plane);
And that's what I added to my render loop:
// Render the HUD on top of the scene. This assumes renderer.autoClear has
// been set to false, so this second pass does not wipe the main scene
// (alternatively, clear only the depth buffer between the two passes).
renderer.render(sceneHUD, cameraHUD);
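One gotcha worth noting: if the HUD canvas is redrawn at runtime, the texture has to be flagged for re-upload. A small hedged sketch reusing the names from the code above (updateHUD itself is hypothetical):

function updateHUD(text) {
  // Clear and redraw the HUD canvas; font/fill settings persist on the context.
  hudBitmap.clearRect(0, 0, hudCanvas.width, hudCanvas.height);
  hudBitmap.fillText(text, width / 2, height / 2);
  hudTexture.needsUpdate = true; // re-upload the canvas to the GPU
}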
You can play with the full source code here:
http://codepen.io/jaamo/pen/MaOGZV
And read more about the implementation on my blog:
http://www.evermade.fi/pure-three-js-hud/
