I am rendering my 2D background under the water with OpenGL ES. How can I distort my textures over time? I only know that this can be achieved with sin(time) or cos(time), but I'm poor in GLSL and have no idea how to do it. Should I change the x,y coordinates over time? How can I avoid moving the whole texture repeatedly?
Any help is appreciated.
You can distort the texture coordinates to achieve this, but you will need a few parameters.
For instance, you can use a sin or cos function (there is not much of a difference between them) to distort horizontally by moving the X texture coordinate by a small amount. So you introduce a uniform (strength) which is relative to the texture; for instance .1 will distort by a maximum of 10%. The idea would then be to set X = sin(Y)*strength.
Since Y is in the range 0 to 1, you will need another parameter, such as density, to get "more waves"; a value around 20 gives a few waves (change this as you please to test for a nice effect). The equation then becomes X = sin(Y*density)*strength.
Still, this will produce a static distorted image, but what you want is for it to move over time, so you need some vertical time factor delta which changes over time and ranges between .0 and 2*PI. The equation is then X = sin(Y*density + delta)*strength. On every frame you should increase delta, and if it is larger than 2*PI simply decrease it by 2*PI to get a smooth animation. The value you increase delta by controls the speed of the effect.
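Put together, a minimal sketch of such a fragment shader could look like the following (the uniform and varying names u_texture, u_strength, u_density, u_delta and v_texCoord are just made up for illustration; the GLSL source is the same for OpenGL ES 2.0 and WebGL, shown here as a JavaScript string so it can be passed to gl.shaderSource()):

const waterFragmentShader = `
  precision mediump float;

  uniform sampler2D u_texture;   // the background texture
  uniform float u_strength;      // e.g. 0.1 -> distort by at most 10%
  uniform float u_density;       // e.g. 20.0 -> how many waves fit vertically
  uniform float u_delta;         // animated on the CPU, ranges over 0 .. 2*PI

  varying vec2 v_texCoord;       // plain 0..1 texture coordinates from the vertex shader

  void main() {
    vec2 uv = v_texCoord;
    // offset X by a sine of Y: only the sampling position wobbles,
    // the texture itself does not slide around as a whole
    uv.x += sin(uv.y * u_density + u_delta) * u_strength;
    gl_FragColor = texture2D(u_texture, uv);
  }
`;

// each frame, on the CPU side (deltaLocation obtained via gl.getUniformLocation):
delta += 0.05;                                     // this increment controls the speed
if (delta > 2.0 * Math.PI) delta -= 2.0 * Math.PI;
gl.uniform1f(deltaLocation, delta);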
So now you have 3 uniform parameters which you can play around with to get the desired effect. I hope this helps.
I reconstructed a scene using SfM techniques and am able to extract a 3D mesh. However, the mesh has no absolute scale, as is expected with SfM techniques.
To scale the mesh, I am able to generate planes with real-world scale. E.g.,
I tried to play around with ICP to scale and register the SfM mesh to match the scale of the planes but was not very successful. Could anyone point me in the right direction on how to solve this issue? I would like to scale the SfM mesh to match the real world scale. (I do not need to register the two meshes)
You need to relate some distance in the model to some measurable distance in the physical world. The easiest is probably the camera height above the floor plane. If that is not available, then perhaps the height of the bed or the size of the pillow.
Let's say the physical camera height is 1.6 m and in the model the camera is 800 units of length above the floor plane; then the scale factor you need to apply (to get 1 unit of length = 1 mm) is:
scale_factor = 1600 / 800 = 2.0
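Applying that factor is then just a uniform scale of every vertex position. A minimal sketch (assuming the mesh positions are available as a flat [x0, y0, z0, x1, y1, z1, ...] array called positions):

const physicalCameraHeightMm = 1600;  // 1.6 m measured in the real world
const modelCameraHeight = 800;        // the same height measured in model units
const scaleFactor = physicalCameraHeightMm / modelCameraHeight; // = 2.0

for (let i = 0; i < positions.length; i++) {
  positions[i] *= scaleFactor;        // after this, 1 model unit = 1 mm
}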
I ended up doing the following; I hope this helps someone, and if anyone has a better suggestion, I will take it.
1) I used pyrender to render the two meshes from known poses in their respective worlds to get exact correspondences
2) I then used Procrustes analysis to figure out the scaling factor by computing the transformation of one mesh to the other. You can find Procrustes here
I am able to retrieve a scaling factor that is in an acceptable range.
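For illustration, the scaling part of a Procrustes alignment boils down to the ratio of the two centered point clouds' sizes. A rough sketch of just that step (the pyrender rendering itself is not shown; pointsA and pointsB are assumed to be the corresponding points recovered from it, and the snippet is in JavaScript only to match the rest of this page):

// pointsA / pointsB: arrays of corresponding [x, y, z] points, same length and order,
// one set from the SfM mesh and one from the real-scale planes
function centroid(points) {
  const c = [0, 0, 0];
  for (const p of points) { c[0] += p[0]; c[1] += p[1]; c[2] += p[2]; }
  return c.map(v => v / points.length);
}

function centeredNorm(points) {
  const c = centroid(points);
  let sum = 0;
  for (const p of points) {
    sum += (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 + (p[2] - c[2]) ** 2;
  }
  return Math.sqrt(sum);
}

// scale to apply to the SfM mesh so it matches the real-world-sized planes
const scaleFactor = centeredNorm(pointsB) / centeredNorm(pointsA);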
I am writing a small program using three.js.
I have rendered a mesh from a PLY object, and I want to heat the polygons that are close to the mouse position. When I move the mouse, all polygons nearby must smoothly change color to red, and the other polygons must smoothly return to their normal color over time.
I have succeeded in getting the mouse position and changing the color of the nearest polygons, but I don't know how to handle the smooth fading over time for the other polygons.
Should I do it in the shader, or should I pass some additional data to the shader?
I would do something simple like this (in a timer):
dtemp = Vertex_temp - background_temp;           // how far the vertex is from its target temperature
Vertex_temp -= temp_transfer * dtemp * dt / T;   // move a fraction of that difference toward it
where temp_transfer = <0,1> is a unit-less coefficient that adjusts the speed of heat transfer, dt [sec] is the time elapsed (the interval of your timer or update routine), and T [sec] is the time scale for the temp_transfer coefficient.
So if your mouse is far away, let background_temp = 0.0 [C], and if not, set it to background_temp = 255.0 [C]. Now you can use Vertex_temp directly to compute the color ... using it as the red channel <0,255>.
But as you can see, this is better suited to the CPU side than to GLSL, because you need to update the color VBO each frame using its previous values... In GLSL you would need to encode the temperatures into a texture, render the new values into another texture, and then convert that back into a VBO, which is too complicated... maybe a compute shader could do it in a single pass, but I am not familiar with those.
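For what it's worth, a rough CPU-side sketch of that loop in three.js terms (the names geometry, temps and hotVertexIndices, and the idea of storing one temperature per vertex, are assumptions made for illustration):

// geometry: a THREE.BufferGeometry with a "color" attribute, used by a material
// with vertexColors enabled; temps holds Vertex_temp for every vertex
const temps = new Float32Array(geometry.attributes.position.count);
const tempTransfer = 0.5;  // unit-less <0,1>
const T = 1.0;             // time scale [sec] for tempTransfer

function updateTemperatures(dt, hotVertexIndices) {
  const colors = geometry.attributes.color;
  for (let i = 0; i < temps.length; i++) {
    // vertices near the mouse are heated, all others cool back down
    const backgroundTemp = hotVertexIndices.has(i) ? 255.0 : 0.0;
    const dtemp = temps[i] - backgroundTemp;
    temps[i] -= tempTransfer * dtemp * dt / T;

    // use the temperature directly as the red channel
    colors.setXYZ(i, temps[i] / 255.0, 0.0, 0.0);
  }
  colors.needsUpdate = true;  // re-upload the color VBO
}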
I have a scene with a single camera and one PlaneBufferGeometry.
If I make this plane size 1x1 I get 60fps
If I make this plane size 1000x1000 I get <20fps
Why does this happen? I am drawing the same number of vertices to the screen.
Here is a fiddle showing the problem
Just change the definition of size between 1 and 1000 to observe the problem.
var size = 10000;
//size = 1;
var geometry = new THREE.PlaneBufferGeometry(size, size);
I am adding 50 identical planes in this example. There isn't a significant fps hit with only one plane.
It's definitely normal. A larger plane covers more surface on the screen, and thus more pixels.
More fragments are emitted by the rasterisation process. For each one, the GPU will check whether it passes the depth test and/or the stencil test; if so, it will invoke the fragment shader for that pixel.
Try zooming in on your 1x1 plane until it covers the whole screen. Your FPS will drop as well.
@pleup has a good point there; to expand on that a little bit: even a low-end GPU will have absolutely no problem overdrawing (painting the same pixel multiple times) several times (I'd say something like 4 to 8 times) at fullscreen and still keep it up at 60 FPS. This number is likely a bit lower for WebGL due to the compositing with the DOM and browser UI, but it's still multiple times for sure.
Now what is happening is this: you are in fact creating 50 planes, not only one, all of them with the same size in the same place. No idea why, but that's irrelevant here. As all of them are in the same place, every single pixel needs to be drawn 50 times, and in the worst case that is 50 times the full screen area.
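To put a rough number on that (assuming, say, a 1920x1080 canvas): one full-screen plane already produces about 2 million fragments, so 50 overlapping full-screen planes are on the order of 100 million fragments every frame, far beyond the 4-to-8-times-fullscreen budget mentioned above.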
I would like to ask for help with writing a WebGL engine. I am stuck on texture atlases. There is a texture containing 2x2 pictures, and I draw its upper-left quarter onto a surface (the texture coordinates are: 0-0.5, 0-0.5).
This works properly, but when I look at the surface from afar, all of the sub-pictures blur together and give strange looking colours. I think this is caused by the automatically generated mipmaps: when I look at it from afar, the texture unit uses the 1x1 mipmap level, where the 4 textures are blurred together into one pixel.
It was suggested that I generate the mipmaps myself with a maximum level setting (GL_TEXTURE_MAX_LEVEL), but that is not supported by WebGL. It was also suggested that I use the "textureLod" function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only solution seems to be the bias, the value that can be given as the 3rd parameter of the fragment shader "texture2D" function, but with this I can only set an offset to the mipmap LOD, not the actual value.
My idea is to use the depth value (the distance from the camera) to move the bias (make it go more and more negative) so that the last mipmap levels are not used at greater distances, and samples are always taken from a higher-resolution mipmap level. The issue with this is that I must also take into account the angle of the given surface to the camera, because the LOD value depends on that as well.
So the bias = depth + some combination of the angle. I would like to ask for help calculating this. If someone has any ideas concerning WebGL texture atlases, I would gladly use them.
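For what it's worth, here is a minimal sketch of the bias idea described above (WebGL 1 / GLSL ES 1.0 fragment shader, kept as a JavaScript string for gl.shaderSource(); the uniform/varying names and the linear distance-to-bias mapping are assumptions, and the angle term is left out), just to show where the third texture2D argument goes:

const atlasFragmentShader = `
  precision mediump float;

  uniform sampler2D u_atlas;
  uniform float u_biasScale;   // how fast the bias grows with distance (tune by hand)
  uniform float u_maxBias;     // never push further than this many mip levels

  varying vec2 v_atlasCoord;   // already remapped into the 0-0.5 sub-rectangle
  varying float v_distance;    // per-vertex distance to the camera

  void main() {
    // a negative bias keeps the sampler away from the smallest mip levels,
    // where the 2x2 sub-images bleed into each other
    float bias = -min(v_distance * u_biasScale, u_maxBias);
    gl_FragColor = texture2D(u_atlas, v_atlasCoord, bias);
  }
`;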
I am painting a rope. It is a Sprite built using a 16x16 texture that is repeated (using TextureOptions.REPEATING_BILINEAR, to 16 x ropeLength).
The problem is that I need to change the rope length "on the fly" (I am already doing it in onManagedUpdate), but I would like to also change the texture length, and so avoid the "elastic" effect that happens when changing the sprite length without changing the texture length (the repeating textures are stretched or contracted to match the new sprite size).
I have confirmed that using "this.getTextureRegion().setTextureSize()" has no effect after the Sprite has been created.
Can anybody help me or give me some ideas?
You'll need to modify the u/v coordinates of the vertices instead. Unfortunately I don't know how to do that in AndEngine. Perhaps it's somewhere "near" the function you use to extend the rope (i.e. the one that modifies the x/y/z coordinates of the vertices). Hope this helps.
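The underlying idea, independent of AndEngine (the names below are made up for illustration and are not an AndEngine API): instead of always mapping the quad to the V range 0..1 and letting the repeats stretch, let the maximum V grow with the rope length so that each repeat keeps covering 16 units:

// library-agnostic sketch: UV corners for a rope quad whose 16x16 texture
// should repeat once every 16 units of rope length
const texelBlockSize = 16;
function ropeUVs(ropeLength) {
  const vMax = ropeLength / texelBlockSize;  // grows with the rope, so texels keep their size
  return [
    [0, 0],     // bottom-left
    [1, 0],     // bottom-right
    [0, vMax],  // top-left
    [1, vMax],  // top-right
  ];
}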