Render multiple objects with different characteristics in OpenGL ES 2.0 - opengl-es

I am a beginner with OpenGL ES 2.0 and have been learning it for a few weeks. I can now render multiple objects. However, I have a problem. My question is: how can I render two objects, one that rotates and one that does not? When I want to rotate an object, I call esRotate() on the model-view matrix.
Thanks

The simple solution is to call glDrawArrays() or glDrawElements() twice: the first call for the model you do want to rotate and the second call for the model you do not want to rotate. Only apply esRotate() to the model-view matrix for the first call.
Note that you will also need to call glUniformMatrix4fv() twice to reload the Model-View matrix for each model, with and without the rotation applied.
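A minimal sketch of such a draw routine, assuming the esUtil matrix helpers (esMatrixLoadIdentity, esTranslate, esRotate) that ship with the OpenGL ES 2.0 Programming Guide samples; the program, uniform location, buffer handles, vertex counts, and attribute layout are placeholders for whatever your own init code produces:

```c
#include <GLES2/gl2.h>
#include "esUtil.h"   /* matrix helpers from the OpenGL ES 2.0 Programming Guide samples */

/* Hypothetical draw routine: program, mvLoc, the VBOs and vertex counts
 * are assumed to be created elsewhere in your init code, and attribute
 * location 0 is assumed to be the position attribute. */
static void DrawScene(GLuint program, GLint mvLoc,
                      GLuint vboRotating, GLsizei countRotating,
                      GLuint vboStatic, GLsizei countStatic,
                      GLfloat angle)
{
    ESMatrix modelview;

    glUseProgram(program);
    glEnableVertexAttribArray(0);

    /* First draw call: the object that should rotate. */
    esMatrixLoadIdentity(&modelview);
    esTranslate(&modelview, -1.5f, 0.0f, -6.0f);
    esRotate(&modelview, angle, 0.0f, 1.0f, 0.0f);          /* rotation applied */
    glUniformMatrix4fv(mvLoc, 1, GL_FALSE, &modelview.m[0][0]);
    glBindBuffer(GL_ARRAY_BUFFER, vboRotating);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, countRotating);

    /* Second draw call: same program, but the model-view matrix is
     * reloaded without esRotate, so this object stays still. */
    esMatrixLoadIdentity(&modelview);
    esTranslate(&modelview, 1.5f, 0.0f, -6.0f);
    glUniformMatrix4fv(mvLoc, 1, GL_FALSE, &modelview.m[0][0]);
    glBindBuffer(GL_ARRAY_BUFFER, vboStatic);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, countStatic);
}
```

The key point is that the same shader program is reused; only the uniform matrix uploaded before each draw call differs.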

Related

OpenGL & Multiview

I am trying to implement multiview using OpenGL ES. I have created a vertex shader in which I specified:
#extension GL_OVR_multiview : enable
layout(num_views=2) in;
And I am sending two MVPs (model-view-projection matrices).
I have created an FBO with an array texture as an attachment. I bind this FBO, render to it, and everything works as expected.
Now my question is:
Is it possible to render to only one view in a single draw call, even though I specified two views?
I want to render to a single view per draw call and change the view index in the vertex shader.
Is it possible to change gl_ViewID_OVR in the vertex shader?
Please help me with this.
Thank you.

RealityKit fit model entity within field of view

Using RealityKit, I'm trying to position a PerspectiveCamera to fit a ModelEntity within its field of view. Specifically, I want the entity to fill the frame as much as possible. Does anyone have a function to do this? I've seen an example in a different environment but have not been successful in translating it to RealityKit. I need the function to take all three axes into account when determining the camera distance.
Thanks,
Spiff

Three.js - is there a simple way to process a texture in a fragment shader and get it back in JavaScript code using GPUComputationRenderer?

I need to procedurally generate a texture in a shader and get it back in my JavaScript code in order to apply it to an object.
As this texture is not meant to change over time, I want to compute it only once.
I think GPUComputationRenderer could do the trick, but I can't figure out how, or what the minimal code to achieve this would be.
Sounds like you just want to perform basic RTT (render-to-texture). In this case, I suggest you use THREE.WebGLRenderTarget. The idea is to set up a simple scene with a full-screen quad and a custom instance of THREE.ShaderMaterial containing the shader code that produces your texture. Instead of rendering to the screen (the default framebuffer), you render to the render target. In the next step, you can use this render target as a texture in your actual scene.
Check out the following example that demonstrates this workflow.
https://threejs.org/examples/webgl_rtt
GPUComputationRenderer is actually intended for GPGPU which I don't think is necessary for your use case.
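As a rough sketch of that workflow (the texture size and both shader bodies are placeholders, and `renderer` is assumed to be your existing THREE.WebGLRenderer):

```js
import * as THREE from 'three';

// One-off render-to-texture pass: a full-screen quad with a ShaderMaterial
// is rendered once into a WebGLRenderTarget instead of the screen.
const renderTarget = new THREE.WebGLRenderTarget(512, 512);

const rtScene = new THREE.Scene();
const rtCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const rtMaterial = new THREE.ShaderMaterial({
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = vec4(position.xy, 0.0, 1.0); // full-screen quad, no projection needed
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    void main() {
      // placeholder procedural pattern; put your own generator here
      gl_FragColor = vec4(vUv, 0.5, 1.0);
    }
  `
});
rtScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), rtMaterial));

// Render once into the target instead of the default framebuffer.
renderer.setRenderTarget(renderTarget);
renderer.render(rtScene, rtCamera);
renderer.setRenderTarget(null);

// The generated texture can now be used like any other texture.
const material = new THREE.MeshBasicMaterial({ map: renderTarget.texture });
```

Since the texture never changes, this pass only needs to run once at startup rather than every frame.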

Unity Mecanim Scripting Dynamic Animation

I have a scene with one fully rigged model of a human body, and I would like users to be able to make their own animations by transforming the model.
I am talking about an app with a timeline and the ability to rotate certain muscles.
Could you please recommend a way to do that?
I can imagine storing some information about the model's state at certain times, but I do not know how to save it as an animation.
You can't edit animations at runtime. You can create an animation from scratch and use AnimationClip.SetCurve to build up your animation, but you can't access the curves directly at runtime.
However, in an editor script you can use AnimationUtility to modify an AnimationClip, but of course only in the editor, since it is an editor class.
Answer picked from this thread.
I think the best solution for you is to create many different animations for each body part yourself and let the user choose different combinations of animations (if you are using Animator or Animation components). Or you can split the model by body parts and let the user change transform parameters with iTween (move from A to B and change the angle from C to D). Then you can easily save the object's "from" transform and "to" transform and use them as an animation.

Best way to create an MVC paradigm using OpenGL?

I am beginning to learn about OpenGL development, specifically using Mac OS X and the Cocoa and/or CGL APIs, so I will use those classes as examples, but my question is more about design than a particular implementation. Here's the situation:
I have a 'scene' object, that can contain or reference the data to render itself, and responds to a 'render' message to draw itself, without any transformations.
I have an NSView or NSOpenGLView object that creates the openGLContext and pixelFormats, resizes the view, and updates the ModelView and Projection based on any transformations that are passed to it. The view object also contains a camera struct that is the basis for the openGL transformations.
I have a controller object that is an NSResponder object, and responds to user inputs.
I don't know if this is the best arrangement; 'model' in this case should be the scene, and I suppose classically the controller should mediate action between the model and view, but it seems odd for the view to send [[controller scene] render] every time it wants to draw the view.
And I am not sure if the best place for the 'camera' is in the view. Should I have the scene object also include the camera looking at it, and have it respond to UI input from the controller, which is currently passed to the view?
Or is this the wrong approach altogether? I am trying to shoehorn something into MVC that really isn't meant for it. I am curious as to what sort of design patterns people out there use with OpenGL.
I see it the way you described. The view is designed to control the viewpoint and projection, so keeping a camera reference in it is logical. The scene, on the other hand, works as a pure data container for me, not touching the rendering methods, unlike in your model.
