How to blend colors with background based on distance from camera - opengl-es

So I'm working on WebGL code in JS that draws an approximation of a given function using gl.POINTS, and I'm trying to make those points' colors blend with the background color the farther they get from the camera, so that whenever the user moves or rotates the camera the colors change. I thought of using a blending function, but supposedly I'm supposed to implement that feature mostly in the fragment shader.
All I've figured out so far is that the fragment shader has the built-in variable gl_FragCoord, but that doesn't seem to help much since it's only the position on screen and doesn't seem to keep any z coordinate I could use. Am I missing something?
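What the question describes is essentially fog: compute each point's distance from the camera in the vertex shader, pass it to the fragment shader as a varying, and mix the point color toward the background color there. (For the record, gl_FragCoord is a vec4 and its z component does hold a depth value, but an explicit varying is easier to control.) A minimal sketch, with assumed uniform names (uModelView, uProjection, uColor, uBackground, and a uNear/uFar fade range):

// Vertex shader: compute the view-space distance and hand it to
// the fragment shader as a varying.
const vertexSrc = `
attribute vec3 aPosition;
uniform mat4 uModelView;
uniform mat4 uProjection;
varying float vDist;
void main() {
  vec4 viewPos = uModelView * vec4(aPosition, 1.0);
  vDist = length(viewPos.xyz);          // distance from the camera
  gl_Position = uProjection * viewPos;
  gl_PointSize = 3.0;
}`;

// Fragment shader: fade the point color toward the background color
// as the distance moves from uNear to uFar.
const fragmentSrc = `
precision mediump float;
uniform vec3 uColor;
uniform vec3 uBackground;
uniform float uNear;
uniform float uFar;
varying float vDist;
void main() {
  float t = clamp((vDist - uNear) / (uFar - uNear), 0.0, 1.0);
  gl_FragColor = vec4(mix(uColor, uBackground, t), 1.0);
}`;

Because the blend happens per fragment against a uniform background color, the points fade consistently as the camera moves, and no GL blending state is needed.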

Related

OpenGL 2d (orthographic) rendering for GUI, the modern way

I want to render GUI components in my OpenGL program. They are still just simple textured (VBO) rectangles.
I would like to get the following things done, the right way.
Drawing using screen coordinates, or at least using a coordinate system that isn't based on perspective-like floating-point values. For example: right now the coordinate system goes from -1f to 1f (left to right of the screen). It would be more logical to use screen/pixel coordinates.
If it's easy to do, I'd like that the GUI doesn't stretch when the window (viewport) resizes.
I know, previously you could do a lot using the deprecated function glOrtho. But since I want to do it the modern way, which is hopefully also better for performance, I don't know how to start.
After searching on the internet, I came to the conclusion that I have to use a shader. I'm not very familiar with shaders.
And another question: does performance increase when doing this using a shader?
What you do with modern OpenGL is essentially the same as using glOrtho to set up an orthographic projection matrix: create a transformation (matrix) that maps coordinates 1:1 into viewport coordinates and use that to transform your vertices.
For example, you could create a vec2 uniform, viewport, and set it to the viewport width/height. Then in the vertex shader you can use it to transform your pixel-coordinate vertex positions into the range [-1,1], like this:
gl_Position = vec4(2.0 * vpos.xy / viewport - 1.0, 0.0, 1.0);
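Put together, a complete vertex shader along those lines could look like the sketch below (written as a GLSL string in the WebGL style used elsewhere on this page; aPixelPos and uViewport are assumed names, and y is flipped so that pixel coordinates grow downward as usual):

const guiVertexSrc = `
attribute vec2 aPixelPos;  // vertex position in pixels
uniform vec2 uViewport;    // viewport width/height in pixels
void main() {
  vec2 ndc = 2.0 * aPixelPos / uViewport - 1.0; // map to [-1,1]
  gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);  // flip y so it grows downward
}`;

Update the viewport uniform whenever the window resizes; since the GUI is specified in pixels, it keeps its size instead of stretching with the window.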

How can I get light emission per vertex and per-vertex lighting in Three.js?

I want to see a chart with color specified per vertex, and to get a little bit of shading too.
But if I use MeshBasicMaterial I only get the vertex colors, with no dynamic shading.
On the other hand, if I use MeshPhongMaterial I get shading but without the emissive contribution from my vertex colors.
As the Three.js MeshPhongMaterial supports vertexColors, giving you a nice combination of dynamic lighting and vertex colors, I'm not quite sure I understand your question. Perhaps that is something you should investigate more?
However, as an alternative to writing a custom shader you could try rendering your model in multiple passes.
This will not give you as much control over the way the vertex colors and phong lighting are combined as a shader would, but often a simple add/multiply blend can give pretty decent results.
Algorithm:
- create two meshes for the BufferGeometry, one with the MeshBasicMaterial and one with the MeshPhongMaterial
- for the MeshPhongMaterial, set
    depthFunc = THREE.EqualDepth
    transparent = true
    blending = THREE.AdditiveBlending (or THREE.MultiplyBlending)
- render the first mesh
- render the second mesh at the exact same spot
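In code, the algorithm might look roughly like this (a sketch; the geometry, scene and lights are assumed to exist, and older Three.js versions spell vertexColors as THREE.VertexColors instead of true):

// First pass: plain vertex colors; this mesh also writes depth.
const baseMesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({
  vertexColors: true
}));

// Second pass: Phong lighting blended on top of the first pass.
const phongMesh = new THREE.Mesh(geometry, new THREE.MeshPhongMaterial({
  depthFunc: THREE.EqualDepth,     // only touch pixels the first pass drew
  transparent: true,               // route it through the blending stage
  blending: THREE.AdditiveBlending // or THREE.MultiplyBlending
}));

// Both meshes share the geometry, so they render at the exact same spot;
// the transparent Phong mesh is drawn after the opaque base mesh.
scene.add(baseMesh);
scene.add(phongMesh);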

Drawing transparent sprites in a 3D world in OpenGL(LWJGL)

I need to draw transparent 2D sprites in a 3D world. I tried rendering a quad, texturing it (using slick_util) and rotating it to face the camera, but when there are many of them the transparency doesn't really work. The sprite closest to the camera blocks the ones behind it if it's rendered before them.
I think that's because the depth test only keeps the fragment closest to the viewer, without checking the alpha value.
This could be fixed by sorting them from farthest to closest, but I don't know how to do that. And wouldn't I have to use Math.sqrt to get the distance? (I've heard it's slow.)
I wonder if there's an easy way to get transparency in 3D to work correctly (enabling something in OpenGL, for example).
Disable depth testing and render transparent geometry back to front.
Or switch to additive blending and hope that looks OK.
Or use depth peeling.
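On the Math.sqrt worry: for sorting you only need the relative order, and squared distance preserves it, so no square root is required. A sketch (in JavaScript for consistency with the rest of this page; the same idea carries over to LWJGL/Java):

// Sort back to front by squared distance; for non-negative distances,
// comparing d*d gives the same order as comparing d, so sqrt is unnecessary.
function sortBackToFront(sprites, cam) {
  sprites.sort((a, b) => {
    const da = (a.x - cam.x) ** 2 + (a.y - cam.y) ** 2 + (a.z - cam.z) ** 2;
    const db = (b.x - cam.x) ** 2 + (b.y - cam.y) ** 2 + (b.z - cam.z) ** 2;
    return db - da; // farthest first
  });
}

Re-sorting every frame (or only when the camera moves) and then drawing in that order makes standard alpha blending behave.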

How to draw a border around a texture using GLSL

I want to create some textured rectangles (I guess the jargon for this is "quads" :D) with OpenGL ES 2.0 and move them on screen following the mouse pointer.
But now comes the "advanced" part: I want all these rectangles to have a border around them. I could do this by overpainting the texture images in software to draw the borders on top of them, and then pass the modified ("bordered") texture data to the shaders. But I want to do this in hardware, in the shaders (either the vertex or fragment shader, or both).
Is this possible? If yes, can someone post the GLSL shader code for this?
One idea would be to test if either coordinate of the UV is less than 0.1 or greater than 0.9, and then replace the texture texel with a border color if the test is true.
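A minimal fragment shader along those lines (a sketch; uTexture, uBorderColor and vUV are assumed names, and the 0.1/0.9 thresholds make the border 10% of the quad's extent on each side):

const borderFragmentSrc = `
precision mediump float;
uniform sampler2D uTexture;
uniform vec4 uBorderColor;
varying vec2 vUV;
void main() {
  // Inside the border band if either UV coordinate is near an edge.
  bool border = vUV.x < 0.1 || vUV.x > 0.9 || vUV.y < 0.1 || vUV.y > 0.9;
  gl_FragColor = border ? uBorderColor : texture2D(uTexture, vUV);
}`;

Note that the thickness is defined in UV space, so it scales with the quad; for a constant pixel-width border you would pass the quad's on-screen size in as a uniform and derive the thresholds from it.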

Working with Three.js

Context: trying to take Three.js and use it to display conic sections.
Method: creating a mesh of vertices and then connecting Face4s to them. I used two faces to give a front and a back side, so that when the conic section rotates it won't matter from which angle the camera views it.
Problems encountered:
1. Finding an intuitive mouse-rotation scheme. If you think in spherical coordinates, it feels like up/down should change phi and left/right should change theta. But that requires being able to move the camera, and as far as I can tell there is no way to actively change the rotation of anything besides the objects. Does anyone know how to change the rotation of the camera or scene?
2. Is there a way to graph functions that is better than creating a mesh? If the mesh has many points it is too slow, and if it has few points you cannot easily make out the shape of the conic sections.
Any sort of help would be most excellent.
I'm still starting to learn Three.js, so I'm not sure about the second part of your question.
For the first part, to change the camera, there is a very good way, which could also include zooming and moving the scene: the trackball camera.
For the exact code and how to use it, you can view:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_trackballcamera_earth.html
At the bottom of this page (http://mrdoob.com/122/Threejs) you can see the example in action (the globe in the third row from the bottom).
There is an orbit control script for the three.js camera.
I'm not sure if I understand the rotation bit. You do want to rotate an object, but you are correct, the rotation is relative.
When you rotate or move your camera, a matrix is calculated for that position/rotation, and it does indeed rotate the scene while keeping the camera static.
This is irrelevant though: you work in model/world space and position your camera in it, and the engine takes care of the rotations under the hood.
What you probably want is to set up an object, hook your rotation up to spherical coordinates, and link your camera as a child of this object. Translation along the camera's Z axis relative to the object then mimics a dolly (zoom is an FOV change).
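A sketch of that rig (the scene and camera are assumed to exist; dTheta/dPhi would come from mouse deltas):

// Pivot object at the origin; the camera is parented to it,
// offset along Z so it looks back at the pivot.
const pivot = new THREE.Object3D();
pivot.add(camera);
camera.position.set(0, 0, 10); // dolly distance along the pivot's Z
scene.add(pivot);

// Feed mouse deltas into the spherical angles.
function onDrag(dTheta, dPhi) {
  pivot.rotation.y += dTheta; // left/right
  pivot.rotation.x += dPhi;   // up/down (clamp to avoid flipping over the pole)
}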
You can rotate the camera by changing its position. See the code I pasted here: https://gamedev.stackexchange.com/questions/79219/three-js-camera-turning-leftside-right
As others have said, OrbitControls.js is an intuitive way for users to manage the camera.
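Basic usage is only a couple of lines, assuming OrbitControls.js from the Three.js examples folder is included:

// Lets the user orbit, zoom and pan the camera around a target point.
const controls = new THREE.OrbitControls(camera, renderer.domElement);
controls.target.set(0, 0, 0); // the point the camera orbits

function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();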
I tackled many of the same issues when building formulatoy.net. I used Morphing Geometries, since I found that mapping 3D math functions to a UV surface requires very little code, and it allowed an easy way to implement different coordinate systems (Cartesian, spherical, cylindrical).
You could use particles instead of a mesh, I suppose, but a mesh seems best. The lattice material is not too useful if you're trying to understand a surface mathematically. At this point I'm thinking of drawing my own X,Y lines on the surface (or phi, theta lines, etc.) to better demonstrate cross-sections.
Hope that helps.
You can use trackball controls to zoom in and out of an object, rotate it, and pan. With trackball controls you are moving the camera around the object; the object still rotates with respect to the screen or renderer centre (0,0,0).
