Is vertex attribute pointer persistent in OpenGL ES? - opengl-es

Imagine a scenario where you have two GLSL programs, A and B, that are used one after another.
If you set a uniform variable of program A, its value stays the same and requires no initialization before every draw call of that program. So it can be considered a "member" of the program (in terms of OOP).
But what about attribute values, or a vertex attribute that is set as a pointer? Does that state persist in the same way?

You tagged your question as OpenGL ES and WebGL. In both of those, attributes are global state; their state is not connected to any GLSL program.
You can think of it roughly like this:
glState = {
  attributes: [
    { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
    { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
    { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
    { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
    ...
  ]
};
gl.enableVertexAttribArray, gl.disableVertexAttribArray, and gl.vertexAttribPointer affect that global attribute state.
There is an extension for OpenGL ES 2.0 and WebGL for Vertex Array Objects. That extension allows the combined state of all attributes to be set on and stored in a Vertex Array Object, or "VAO". There is still global state as well, which for the most part can be considered the default VAO.
You can consider that to work like this
glState = {
  defaultVAO: {
    attributes: [
      { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
      { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
      { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
      { enabled: false, type: gl.FLOAT, size: 4, stride: 0, offset: 0, buffer: null, },
      ...
    ]
  },
  currentVAO: defaultVAO
};
Now gl.enableVertexAttribArray, gl.disableVertexAttribArray, and gl.vertexAttribPointer affect the attributes of currentVAO in the pseudo code above. Calling createVertexArrayOES creates a new VAO. Calling bindVertexArrayOES sets currentVAO in the pseudo code to your new VAO. Calling bindVertexArrayOES with null sets currentVAO back to the default one.
In ES 3.0, VAOs are always available (in other words, they are no longer an extension; they are part of the basic feature set).
NOTE: VAOs are easy to emulate. There's an emulation library here so you can use VAOs anywhere in WebGL. If they are supported the emulation library will use the native ones. If they aren't it will emulate them.
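To make that concrete, here is a minimal sketch of the extension path described above (not from the original answer; it assumes a WebGL 1 context gl, a buffer positionBuffer, an attribute location positionLoc, and a vertex count vertexCount already exist):
// Hedged sketch: capture attribute state in a VAO via OES_vertex_array_object.
const ext = gl.getExtension('OES_vertex_array_object'); // may be null if unsupported

const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);                                   // currentVAO = vao

gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 4, gl.FLOAT, false, 0, 0); // stored on vao

ext.bindVertexArrayOES(null);                                  // back to the default VAO

// Later, per draw: a single bind restores all of the attribute state captured above.
ext.bindVertexArrayOES(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);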

In OpenGL prior to the 3.2 Core Profile, attribute bindings are handled by the underlying state machine and are global. This means they stay the same as long as no one modifies them.
Beginning with the OpenGL 3.2 Core Profile, these attribute settings are no longer global but are stored in a VAO (Vertex Array Object). Each VAO updates its state while it is bound and keeps the last known state when it is unbound.
In both cases, attribute bindings have no connection to the currently bound shader program.
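To tie this back to the question: because attribute pointers live in global (or per-VAO) state rather than in the program object, state set up once stays in effect across program switches. A minimal WebGL sketch of that (my own illustration, assuming programA, programB, a buffer buf, a vertex count count, and that both programs use the same attribute location loc):
// Hedged sketch: attribute pointers are not per-program state.
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 4, gl.FLOAT, false, 0, 0);

gl.useProgram(programA);
gl.drawArrays(gl.TRIANGLES, 0, count); // uses the pointer set above

gl.useProgram(programB);
gl.drawArrays(gl.TRIANGLES, 0, count); // the same pointer is still in effect

// Uniforms, by contrast, are stored per program object, so programA's uniform
// values are untouched by anything set while programB is current.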

Buffer elements based on its contents with comparer function

In RxJS, how can I buffer values so that the buffer is flushed when the next element is different from the previous one? If elements are the same according to some comparator, they should be buffered until the next change is detected.
Suppose I have these elements:
{ t: 10, price:12 },
{ t: 10, price:13 },
{ t: 10, price:14 },
{ t: 11, price:12 },
{ t: 11, price:13 },
{ t: 10, price:14 },
{ t: 10, price:15 },
Elements are "the same" if their t property value equals the previous element's t value, so at the output I just want these buffers:
[ { t: 10, price:12 }, { t: 10, price:13}, { t: 10, price:14} ],
[ { t: 11, price:12}, { t: 11, price:13} ],
[ { t: 10, price:14 }, { t: 10, price:15 } ]
So in the result I get three buffers emitted, each containing consecutive objects with the same t value.
I was trying to use bufferWhen or just buffer, but I don't know how to specify the closingNotifier in this case, because it needs to depend on the elements that are arriving. Can anyone help?
TLDR;
import { from, delay, share, distinctUntilKeyChanged, skip, bufferWhen } from 'rxjs'; // RxJS 7+

const items = [
  { t: 10, price: 12 },
  { t: 10, price: 13 },
  { t: 10, price: 14 },
  { t: 11, price: 12 },
  { t: 11, price: 13 },
  { t: 10, price: 14 },
  { t: 10, price: 15 }
];

const src$ = from(items).pipe(
  delay(0),
  share()
);

const closingNotifier$ = src$.pipe(
  distinctUntilKeyChanged('t'),
  skip(1),
  share({ resetOnRefCountZero: false })
);

src$.pipe(bufferWhen(() => closingNotifier$)).subscribe(console.log);
StackBlitz demo.
Detailed explanation
The tricky part was to determine the closingNotifier because, as you said, it depends on the values that come from the stream. My first thought was that src$ has to play 2 different roles: 1) the stream which emits values and 2) the closingNotifier for a buffer operator. This is why the share() operator is used:
const src$ = from(items).pipe(
delay(0),
share()
);
delay(0) is also used because the source's items are emitted synchronously. Since the source is subscribed twice (because it is both the stream and the closingNotifier), it's important that both subscribers receive the values. If delay(0) were omitted, only the first subscriber would receive the items and the second one would receive nothing, because it would be registered after all of the source's items had already been emitted. With delay(0) we simply ensure that both subscribers (the one from the subscribe callback and the inner subscriber of the closingNotifier) are registered before the source emits any value.
Onto closingNotifier:
const closingNotifier$ = src$.pipe(
distinctUntilKeyChanged('t'),
skip(1),
share({ resetOnRefCountZero: false })
);
distinctUntilKeyChanged('t') is used because the signal that the buffer should emit its accumulated items is an item arriving with a different t value than the previous one.
skip(1) is used because the very first value from the stream, arriving right after the first subscription to the closingNotifier, would otherwise cause the buffered items to be flushed immediately, which is not what we want, since it belongs to the first batch of items.
share({ resetOnRefCountZero: false }) is the interesting part. As you've seen, we're using bufferWhen(() => closingNotifier$) instead of buffer(closingNotifier$). That is because buffer first subscribes to the source and then to the notifier, which complicates the situation a bit, so I decided to go with bufferWhen, which subscribes to the notifier first and then to the source. The problem with bufferWhen is that it re-subscribes to the closingNotifier each time the buffer emits, and we don't want to repeat the first-batch logic (the skip operator) once items have already flowed; that's why share is needed. The problem with a plain share() (without the resetOnRefCountZero option) is that it would still re-subscribe each time, because resetting when the inner Subject is left without subscribers is the default behavior. This is solved by resetOnRefCountZero: false, which prevents re-subscribing to the source when a new subscriber is registered after the inner Subject had previously been left without subscribers.
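Putting it together (this walkthrough is my own, based on the desired output in the question): distinctUntilKeyChanged('t') plus skip(1) makes closingNotifier$ fire twice, once when { t: 11, price: 12 } arrives and again when the second run of t: 10 starts, and bufferWhen flushes the last open buffer when the source completes, so the subscribe callback should log three arrays:
// Expected console output (matching the desired result from the question):
// [ { t: 10, price: 12 }, { t: 10, price: 13 }, { t: 10, price: 14 } ]
// [ { t: 11, price: 12 }, { t: 11, price: 13 } ]
// [ { t: 10, price: 14 }, { t: 10, price: 15 } ]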

Mapbox/threebox Changing Scale On Existing 3D Object Not Working

First, I'm somewhat new to Mapbox. I've made some fun things work ok, but that doesn't mean I'm doing things the correct/best way. Always happy to learn how to do things better.
I'm trying to set up a page that loads a 3D model and gives you on-screen controls to manipulate the 3D model after it loads.
I've gotten x/y/z movement and rotation to work ok but scale isn't working correctly. I've tried a few different ways (detailed below) and scale just doesn't change.
I started with the standard Mapbox 3D model code example here:
https://docs.mapbox.com/mapbox-gl-js/example/add-3d-model/
And I'm using jscastro76's threebox fork from here:
https://github.com/jscastro76/threebox/
Note: regardless of what the initial options scale is set to, the console.log/console.dir below shows 0.0262049, although changing that initial scale does affect the initial size of the model. That makes me think I'm reading the wrong scale property, but none of the scale-setting attempts visibly changed the model's scale either. Doing a console.dir(defaultModel) and looking through the properties, everything that looks scale-related is also always 0.0262049, including matrix.elements 0/5/10 and scale x/y/z.
Any thoughts/comments? Thanks in advance!
Code I'm using....
Adding the 3D object:
map.addLayer({
  id: 'custom_layer',
  type: 'custom',
  renderingMode: '3d',
  onAdd: function (map, mbxContext) {
    window.tb = new Threebox(
      map,
      mbxContext,
      {
        defaultLights: true,
        enableSelectingObjects: true,
        enableDraggingObjects: true,
        enableRotatingObjects: true
      }
    );
    var options = {
      obj: 'model.glb',
      type: 'gltf',
      scale: 1, // I get 0.0262049 later regardless of what this is set to. Models with different initial scale set here work correctly, but still can't change it later
      units: 'meters',
      rotation: { x: 90, y: 0, z: 0 },
      anchor: 'center'
    };
    tb.loadObj(options, function (model) {
      defaultModel = model.setCoords(origin);
      defaultModel.addEventListener('ObjectDragged', onDraggedObject, false);
      tb.add(defaultModel);
    });
  },
  render: function (gl, matrix) {
    tb.update();
  }
});
// Attempt #1, scale.set
scale = defaultModel.scale;
console.log('Original scale:');
console.dir(scale);
// x: 0.0262049
// y: 0.0262049
// z: 0.0262049

defaultModel.scale.set(1, 1, 1);
scale = defaultModel.scale;
console.log('New scale:');
console.dir(scale);
// x: 0.0262049
// y: 0.0262049
// z: 0.0262049
I also tried all of these with the same before/after results:
defaultModel.matrix.makeScale(1, 1, 1);
defaultModel.setScale(1);
defaultModel.scale.x = 1;
defaultModel.scale.y = 1;
defaultModel.scale.z = 1;
defaultModel.matrix.scale(1);
defaultModel.matrix.scale(1, 1, 1);
I saw reference to using a THREE.Vector3 object so I tried this, with the same results:
var threeV3 = new THREE.Vector3(
1,
1, // also tried -1 on some
1
);
defaultModel.scale.set(threeV3);
defaultModel.matrix.makeScale(threeV3);
defaultModel.setScale(threeV3);
defaultModel.matrix.scale(threeV3);

How to add objectpicker and Camera to my entity in Qt3D?

I need to render some lines and points that hold some data in a 3D scene.
The points need to be pickable with the mouse, so that I can then get the data stored in the picked point.
I first tried to define a class inherited from QQuickFramebufferObject; however, I found it difficult to do mouse picking that way.
I found that the Qt3D module has an ObjectPicker, so I want to use Qt3D for this.
My test code is below. I defined my GeometryRenderer object and set the vertex data (it just draws two triangles), like this:
GeometryRenderer {
    id: geometry
    geometry: Geometry {
        boundingVolumePositionAttribute: position
        Attribute {
            id: position
            attributeType: Attribute.VertexAttribute
            vertexBaseType: Attribute.Float
            vertexSize: 3
            count: 4
            byteOffset: 0
            byteStride: 6 * 4
            name: "position"
            buffer: vertexBuffer
        }
        Attribute {
            id: color
            attributeType: Attribute.VertexAttribute
            vertexBaseType: Attribute.Float
            vertexSize: 3
            count: 4
            byteOffset: 3 * 4
            byteStride: 6 * 4
            name: "color"
            buffer: vertexBuffer
        }
        Attribute {
            attributeType: Attribute.IndexAttribute
            vertexBaseType: Attribute.UnsignedShort
            vertexSize: 1
            count: 6
            buffer: indexBuffer
        }
    }
    Buffer {
        id: vertexBuffer
        type: Buffer.VertexBuffer
        data: new Float32Array(...)
    }
    Buffer {
        id: indexBuffer
        type: Buffer.IndexBuffer
        data: new Uint16Array(...)
    }
}
And then define a material object like this:
Material {
    id: material
    effect: Effect {
        techniques: Technique {
            graphicsApiFilter {
                profile: GraphicsApiFilter.CoreProfile
            }
            renderPasses: RenderPass {
                shaderProgram: ShaderProgram {
                    vertexShaderCode: loadSource("qrc:/shader/hellotriangle.vert")
                    fragmentShaderCode: loadSource("qrc:/shader/hellotriangle.frag")
                }
            }
        }
    }
}
And then in my root Entity:
Entity {
    id: root
    components: [
        RenderSettings {
            activeFrameGraph: colorBuffer
            pickingSettings.pickMethod: PickingSettings.TrianglePicking
            pickingSettings.pickResultMode: PickingSettings.NearestPick
        },
        InputSettings { }
    ]
    ClearBuffers {
        id: colorBuffer
        clearColor: Qt.rgba(0.8, 0.8, 0.8, 0.6)
        buffers: ClearBuffers.ColorDepthBuffer
        RenderSurfaceSelector {
            RenderStateSet {
                renderStates: DepthTest {
                    depthFunction: DepthTest.Less
                }
            }
        }
    }
}
It works and renders the two triangles. Now I want to add a Qt3D Camera and an ObjectPicker to my scene. How can I do that?
I found that if I use a ForwardRenderer instead of ClearBuffers, the Camera and ObjectPicker work, but then I couldn't find a way to render my own line/point vertices.

Jsplumb - Connectors

I am trying to draw a flowchart. I create divs dynamically, set a unique id property for each div, and connect them using jsPlumb connectors.
I get the source and destination ids from the database (note that the id property of a dynamically created div is its database ID) and store them in a connectors JSON structure. Its format is, e.g.:
{[from:A,to:B], [from:A,to:C], [from:B,to:C]}
angular.forEach(connectors, function (connect) {
    $scope.connection(connect.from, connect.to);
});
The jsplumb code is as follows
$scope.connection = function (s, t) {
    var stateMachineConnector1 = {
        connector: ["Flowchart", { stub: 25, midpoint: 0.001 }],
        maxConnections: -1,
        paintStyle: { lineWidth: 3, stroke: "#421111" },
        endpoint: "Blank",
        anchor: "Continuous",
        anchors: [strt, end],
        overlays: [["PlainArrow", { location: 1, width: 15, length: 12 }]]
    };
    var firstInstance = jsPlumb.getInstance();
    firstInstance.connect({ source: s.toString(), target: t.toString() }, stateMachineConnector1);
};
THE PROBLEM:
What I have now: the connector from B to C overlaps the existing A to C connector.
What I need is to separate the two connections.
I could not find a solution for this anywhere. Any help? Thanks!
Using a Perimeter anchor calculates an appropriate position for the endpoints.
jsfiddle demo for perimeter
jsPlumb.connect({
    source: $('#item1'),
    target: $("#item2"),
    endpoint: "Dot",
    connector: ["Flowchart", { stub: 25, midpoint: 0.001 }],
    anchors: [
        ["Perimeter", { shape: "Square" }],
        ["Perimeter", { shape: "Square" }]
    ]
});
Jsplumb anchors
What I suggest you do, to exactly replicate your schema, is to set two endpoints on each box, on A, B and C.
A's endpoints should be [0.25, 1, 0, 0, 0, 0] and [0.75, 1, 0, 0, 0, 0].
B's and C's endpoints should be [0.25, 0, 0, 0, 0, 0] and [0.75, 0, 0, 0, 0, 0].
It basically works like this (I might be wrong about the last four values, it's been a while, but you only need to worry about x and y):
[x, y, offsetX, offsetY, angle, angle]
For x, 0 is the extreme left and 1 is the extreme right.
The same goes for y (0 is the top and 1 is the bottom).
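As an illustration of that suggestion, here is a rough sketch only (the element ids A, B and C and the connector settings are taken from the question, and the anchor arrays are interpreted as jsPlumb's usual [x, y, dx, dy] format):
// Hedged sketch: give each connection its own fixed anchors so they don't overlap.
var instance = jsPlumb.getInstance();

var common = {
    connector: ["Flowchart", { stub: 25, midpoint: 0.001 }],
    endpoint: "Blank",
    paintStyle: { lineWidth: 3, stroke: "#421111" },
    overlays: [["PlainArrow", { location: 1, width: 15, length: 12 }]]
};

// A -> B leaves A's left-bottom anchor and arrives at B's left-top anchor.
instance.connect({
    source: "A",
    target: "B",
    anchors: [[0.25, 1, 0, 1], [0.25, 0, 0, -1]]
}, common);

// A -> C leaves A's right-bottom anchor, B -> C arrives at a different
// top anchor of C, so the two connectors into C no longer overlap.
instance.connect({
    source: "A",
    target: "C",
    anchors: [[0.75, 1, 0, 1], [0.25, 0, 0, -1]]
}, common);

instance.connect({
    source: "B",
    target: "C",
    anchors: [[0.75, 1, 0, 1], [0.75, 0, 0, -1]]
}, common);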
Take care

Irrlicht Engine: Window pops up and disappears instantly

I wanted to create a simple IrrlichtDevice with the Irrlicht Engine, but when I start the application, the window just appears on the screen and then instantly disappears.
My code looks like the following:
int main()
{
    IrrlichtDevice *device =
        createDevice(video::EDT_DIRECT3D9, dimension2d<u32>(640, 480), 16,
                     false, false, false, 0);
}
(code copied from the HelloWorld tutorial of the documentation)
Try
#include <irrlicht.h>
using namespace irr;
using namespace core;

int main()
{
    IrrlichtDevice *device =
        createDevice(video::EDT_DIRECT3D9, dimension2d<u32>(640, 480), 16,
                     false, false, false, 0);

    while (device->run()) // keep the window alive until it is closed
    {
        device->getVideoDriver()->beginScene(true, true, video::SColor(50, 50, 50, 50));
        device->getVideoDriver()->endScene();
    }

    device->drop(); // release the device when done
    return 0;
}
You have no loop in place. After you create the device, the function immediately ends and everything is cleaned up.
bob2 has the correct answer. I would suggest that you practice writing simple C++ applications before diving into the deep end.
