Rendering a viewport to a sprite's texture in Godot only gets one quarter of the rendered image

On the right side I have the Viewport object that renders the OriginalImage.
On the left side I have a Sprite whose texture is the output of the Viewport object.
When running the scene, I only get one quarter of the image, no matter how I place or offset the Viewport, Camera, Sprites, etc.
For more information, please check the repo and of course pull requests are welcome:
https://github.com/Drean64/godot-viewport

Offset the Camera to the center of the viewport, because it was otherwise centered at the viewport's origin, so only one quadrant of the rendered image fell within its view.
https://github.com/Drean64/godot-viewport/commit/ccfd095eb4bc23e2c5f385a35921280b878ee3d7

Related

User painting on a canvas within A-Frame

I have an A-Frame scene that contains, among other things, a <canvas> element that is the material source for a 3D scene object. I can paint on the canvas programmatically, and it shows up as a texture. So far, so good.
However, I'd now also like to enable the user to paint something on the canvas using the controllers. I have added two raycasters/controls:
<a-entity laser-controls="hand: left" raycaster="objects: table2"></a-entity>
<a-entity laser-controls="hand: right" raycaster="objects: table2"></a-entity>
And on the table2 object, I have added a raycaster-listen mixin as described in https://aframe.io/docs/1.3.0/components/raycaster.html#listening-for-raycaster-intersection-data-change.
This works in so far as I get the console log entries with the world coordinates of the intersection point, but I'm absolutely stuck at how to get from the world coordinates back to the canvas coordinates I need to actually paint in the right spot.
In addition, it seems no canvas draw commands I issue in the raycaster-listen tick callback actually have any visible effect (regardless of coordinates).
Any hints appreciated!
As usual, I figured it out the next day 😉
[...] I'm absolutely stuck at how to get from the world coordinates back to the canvas coordinates I need to actually paint in the right spot.
Solution found at https://discourse.threejs.org/t/convert-camera-frustrum-to-uv-coordinate-on-texture/16791/2 - just use intersection.uv which actually contains the normalized texture coordinates of the intersection point. Scale by canvas width/height and you're done.
[...] it seems no canvas draw commands I issue in the raycaster-listen tick callback actually have any visible effect.
Solution found at aframe not rendering lottie json texture mapped to canvas but works in three.js - set texture.needsUpdate = true; in the tick callback after drawing on the canvas.
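Putting both fixes together, a minimal sketch of a component that could sit on the controller entities alongside the raycaster (names like paint-on-canvas, #paint-canvas and #table2 are illustrative, not from the original scene):

AFRAME.registerComponent('paint-on-canvas', {
  init: function () {
    this.canvas = document.querySelector('#paint-canvas');
    this.ctx = this.canvas.getContext('2d');
  },
  tick: function () {
    const raycaster = this.el.components.raycaster;
    if (!raycaster) { return; }
    const intersection = raycaster.getIntersection(document.querySelector('#table2'));
    if (!intersection || !intersection.uv) { return; }
    // intersection.uv is in normalized texture coordinates (0..1);
    // scale by the canvas size. UV v runs bottom-up, canvas y top-down.
    const x = intersection.uv.x * this.canvas.width;
    const y = (1 - intersection.uv.y) * this.canvas.height;
    this.ctx.fillRect(x - 2, y - 2, 4, 4);
    // Without this, the drawn pixels never reach the GPU.
    const map = intersection.object.material.map;
    if (map) { map.needsUpdate = true; }
  }
});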

Creating a magnifying-glass effect in three.js WebGL

I'm working with an orthographic view in three.js/WebGL renderer, and I want a magnifying glass that tracks with the user mouse. I'm looking for the best way of doing this that's efficient.
When working with raw HTML5 canvas commands, this was easy: I simply defined a circular clip region, zoomed my coordinates, and re-drew the whole scene. With 3D objects, it's less obvious how to do it.
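For reference, the 2D-canvas version looks roughly like this (a sketch; drawScene stands in for the application's own redraw):

function drawMagnifier(ctx, mx, my, radius, zoom) {
  ctx.save();
  // Clip to a circle at the mouse position, zoom the coordinates
  // around that point, and redraw everything inside the clip.
  ctx.beginPath();
  ctx.arc(mx, my, radius, 0, Math.PI * 2);
  ctx.clip();
  ctx.translate(mx, my);
  ctx.scale(zoom, zoom);
  ctx.translate(-mx, -my);
  drawScene(ctx);
  ctx.restore();
}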
The method I've found so far is to do the following:
Define a second camera that looks into the zoomed region. Set the orthographic clip coordinates to be small so that it doesn't need to do much work
Create a THREE.WebGLRenderTarget
Tell the renderer and all my line textures that the resolution is about to change
Render the scene into the RenderTarget
Add a CircleGeometry as a Mesh at the mouse position (in world coordinates but above the rest of the scene, close to the camera). Call this the lens.
Give the lens the WebGLRenderTarget as a texture.
Go back to my default camera, reset all my resolution parameters, and redraw the scene with the 'lens' object added.
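In code, those steps look roughly like this (a sketch, assuming an existing renderer, scene, and orthographic camera; zoomCamera, lens, and renderWithMagnifier are illustrative names, and the resolution bookkeeping is omitted):

const renderTarget = new THREE.WebGLRenderTarget(512, 512);
// Second camera with a small frustum around the zoomed region.
const zoomCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 100);
// Circular lens that shows the render target as its texture.
const lens = new THREE.Mesh(
  new THREE.CircleGeometry(0.5, 64),
  new THREE.MeshBasicMaterial({ map: renderTarget.texture })
);
scene.add(lens);

function renderWithMagnifier(mouseWorldX, mouseWorldY) {
  // First pass: render the zoomed region into the target, without the lens.
  lens.visible = false;
  zoomCamera.position.set(mouseWorldX, mouseWorldY, zoomCamera.position.z);
  renderer.setRenderTarget(renderTarget);
  renderer.render(scene, zoomCamera);
  // Second pass: back to the default camera, redraw with the lens added.
  renderer.setRenderTarget(null);
  lens.visible = true;
  lens.position.set(mouseWorldX, mouseWorldY, 1); // above the scene
  renderer.render(scene, camera);
}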
This works, but I'm worried about parts of it:
I have to render twice per frame
Lines don't draw well because of the resolution problems. I have to keep track of all materials that need to know the screen resolution and update all of them twice per screen render.
Related problems:
I want to overlay some plot axes on top of this, and possibly gridlines. These would change as the view pans. I'm not sure if I should make these 3d objects, or do it in a 2d canvas context I lay overtop.
I want to overlay some plot lines, and have them show up sensibly in the zoomed view. "Sensible" here is hard to figure out: I don't want them too fat in the zoomed view, but I also don't want to scale them up as much as the image detail (which is being rendered as a texture onto Plane objects behind).
This is a long post, but I'm still new to three.js and looking for good ideas.

How to offset target position in ThreeJS controls

I am using ThreeJS's OrbitControls so that when an object in my scene is clicked, the camera travels close to it and starts orbiting around it. I'm just moving the controls.target position, camera position and setting controls.autoRotate = true.
The clicked object gets centered on screen, which is nice, but sometimes I need to show text covering up to 50% of the bottom area of the screen, and then the selected object gets hidden by it. So, I'd need to somehow offset the rotation center up a bit.
Perhaps another way of asking this is that I need to change the center of rotation so that it is NOT the center of the screen (or the center of the renderer canvas)
I've tried moving the target up but, of course, then the camera doesn't orbit around the selected 3D object but around an empty space close to it. Any idea on how to proceed?
Many thanks!
I finally got the desired results following the comments in this other thread, by using camera.setViewOffset.
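For example, a sketch of the idea (the quarter-height offset is illustrative; the sign and fraction depend on how much of the screen the text covers):

const w = renderer.domElement.clientWidth;
const h = renderer.domElement.clientHeight;
// Render a sub-region that starts 25% down the full image, which
// shifts the projected center (and controls.target) up on screen.
camera.setViewOffset(w, h, 0, h * 0.25, w, h);
camera.updateProjectionMatrix();
// camera.clearViewOffset() restores the normal projection.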

getting sprites to work with three.js and different camera types

I've got a question about getting sprites to work with three.js using perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene, all of the levels are stacked on top of each other to give a 3D view of the building, and an orthographic camera is used to view it. In another part of the scene, I have just the selected level of the building being shown, viewed with a perspective camera. The screen is divided between the two views. The idea is that the user selects a level from the building view and a more detailed map of that level is shown on the other part of the screen.
I played around with sprites for a little bit and, as far as I understand it, if the sprite is viewed with a perspective camera then the sprite's scale property is actually its size property, and if it is viewed with an orthographic camera the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it, and this seems to be the case. If I scale the sprite by 0.5, the sprite takes up half the orthographic camera's viewport and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded down to 0px and not rendered, or up to 1px and effectively invisible). If I scale the sprite by, say, 50, then the perspective camera can see it (presumably because it's a 50px x 50px square) and the orthographic camera's view is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels, and when I do this it works fine for the detail view, but the building view is overtaken. I played with the numbers and it seems that if I scale the sprite by 4, it starts to show up on my building view, even though there's a 1000-unit distance between the views and the sprite isn't visible with the perspective camera.
So, if my understanding is correct, I need to either use separate scenes, have a much bigger gap between views, use the same camera type for both views, or not use sprites.
There are basically two different ways you can use sprites: either with 2D screen coordinates or 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the demo at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and zoom out in that demo, notice that the sprites in-scene will change size, while the others do not.
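For instance, a sketch of the two usages (assuming an existing renderer, scene, camera, and a loaded spriteTexture; positions and sizes are illustrative):

// In-scene sprite: positioned in world coordinates, scales with distance.
const worldSprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: spriteTexture }));
worldSprite.position.set(1000, 0, 0); // e.g. near the detail map
worldSprite.scale.set(35, 35, 1);     // world units, not pixels
scene.add(worldSprite);

// Screen-coordinate sprite: a separate scene with an orthographic camera
// mapped 1:1 to screen pixels, so the sprite's scale really is in pixels.
const hudScene = new THREE.Scene();
const hudCamera = new THREE.OrthographicCamera(
  -window.innerWidth / 2, window.innerWidth / 2,
  window.innerHeight / 2, -window.innerHeight / 2, 0, 10);
const hudSprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: spriteTexture }));
hudSprite.scale.set(35, 35, 1); // 35x35 pixels on screen
hudScene.add(hudSprite);

function renderFrame() {
  renderer.autoClear = false;
  renderer.clear();
  renderer.render(scene, camera);       // 3D pass
  renderer.render(hudScene, hudCamera); // HUD pass drawn on top
}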
Hope this helps!

Changing the view "center" of an html5 canvas

If I have an image which is 1024x1250 and a canvas element that is 600x800, I can draw the image to the canvas, centered, such that the canvas is essentially a smaller viewport into the larger image. I then want to allow that center point to move, thus creating the illusion that the viewport is viewing a different portion of the image.
Right now I've done this in sort of a hokey way where I redraw the portion of the image I want to see to the canvas, but I get the feeling that this isn't optimal. Is there a way to render the whole image to the canvas and then somehow "transform" my current center point, so this view shift happens behind the scenes, hopefully in some native layer?
You can add transformations to the context before drawing any image (rotation, scaling, translation...). What you need is the function context.translate(x,y).
Then, you only need to draw your image at (0,0).
For example, to display the bottom right portion of your image:
ctx.translate(-424, -450);
ctx.drawImage(image, 0, 0);
You can check this link https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Transformations to see many examples of context transformations.
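Building on that, a small sketch of a reusable pan (assuming canvas is the 600x800 element and image is the loaded 1024x1250 image):

const ctx = canvas.getContext('2d');

function drawAt(centerX, centerY) {
  ctx.save();
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // Shift the context so (centerX, centerY) in image coordinates
  // lands at the center of the canvas.
  ctx.translate(canvas.width / 2 - centerX, canvas.height / 2 - centerY);
  ctx.drawImage(image, 0, 0);
  ctx.restore();
}

drawAt(512, 625); // view the middle of the image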
