Eclipse GEF - Locking ConnectionAnchors to positions on a Figure

Is there a way to lock connections in place when connecting to figures? I am using connection anchors and have specified the vertical and horizontal offsets for the anchors in each figure. After a connection is established, if the figures are moved, the position of the connection point on the figure changes.

It turns out that I hadn't overridden the getTargetConnectionAnchor(ConnectionEditPart connEditPart) method to return the correct anchor, as I had with the getTargetConnectionAnchor(Request request) method.
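For anyone hitting the same thing, here is a rough sketch of the pattern (FixedAnchor and the offsets here are illustrative, not from the original code): both overloads in the NodeEditPart must return the same fixed anchor, otherwise GEF falls back to a default anchor once the connection exists.

// An anchor pinned at a fixed offset from the owner figure's top-left corner.
class FixedAnchor extends AbstractConnectionAnchor {
    private final int xOffset, yOffset;

    FixedAnchor(IFigure owner, int xOffset, int yOffset) {
        super(owner);
        this.xOffset = xOffset;
        this.yOffset = yOffset;
    }

    public Point getLocation(Point reference) {
        Rectangle bounds = getOwner().getBounds();
        Point p = new Point(bounds.x + xOffset, bounds.y + yOffset);
        getOwner().translateToAbsolute(p); // follows the figure as it moves
        return p;
    }
}

// In the edit part (implements org.eclipse.gef.NodeEditPart):
public ConnectionAnchor getTargetConnectionAnchor(ConnectionEditPart connection) {
    return new FixedAnchor(getFigure(), 10, 20);
}

public ConnectionAnchor getTargetConnectionAnchor(Request request) {
    return new FixedAnchor(getFigure(), 10, 20);
}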

Related

Rendering to CAMetalLayer from dedicated render thread / loop

In the Windows world, a dedicated render thread would run a loop something like this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();

        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some unspecified interval (and ideally blocking for v-sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen render buffer (i.e., an MTLTexture with dimensions matching your drawable size), and then blit from the most-recently-drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
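A rough sketch of that blit, assuming a command buffer, an off-screen texture, and a freshly obtained drawable already exist (all variable names here are illustrative):

// Copy the most recently rendered off-screen frame into the drawable's texture.
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:offscreenTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(drawableWidth, drawableHeight, 1)
            toTexture:drawable.texture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];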
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
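For illustration, a hedged sketch of that pairing (the wiring via CVDisplayLinkSetOutputCallback is assumed to happen elsewhere): the display link signals once per refresh, and the render thread waits before starting each frame.

static dispatch_semaphore_t frameSemaphore; // created with dispatch_semaphore_create(0)

static CVReturn DisplayLinkCallback(CVDisplayLinkRef displayLink,
                                    const CVTimeStamp *inNow,
                                    const CVTimeStamp *inOutputTime,
                                    CVOptionFlags flagsIn,
                                    CVOptionFlags *flagsOut,
                                    void *context)
{
    dispatch_semaphore_signal(frameSemaphore); // one permit per v-sync
    return kCVReturnSuccess;
}

// Render thread:
// while (!quit) {
//     dispatch_semaphore_wait(frameSemaphore, DISPATCH_TIME_FOREVER);
//     ...render and present one frame...
// }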
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
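Putting those steps together, a minimal sketch (the command queue variable, clear color, and elided draw calls are placeholders):

id<CAMetalDrawable> drawable = [layer nextDrawable];

MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = drawable.texture;
desc.colorAttachments[0].loadAction = MTLLoadActionClear;
desc.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1);
desc.colorAttachments[0].storeAction = MTLStoreActionStore;

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:desc];
// ...encode drawing commands...
[encoder endEncoding];
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];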
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.

WebAudio changing of orientation of Listener and/or Panner

I am trying to understand how the WebAudio API works. I have two objects: one representing the listener and one the source. I have used the link below as an example, and I am able to move the source and the sound position changes.
https://mdn.github.io/webaudio-examples/panner-node/
The methods to change the orientation are provided, viz. this.panner.setOrientation and this.listener.setOrientation. My question is: if I have a source or listener object (in canvas mode using Three.js, so we know its position and rotation), how do I change the orientation of either the panner or the listener (as the case may be) via JavaScript?
An example would greatly help. Thanks
Any reason not to use THREE's PositionalAudio object? https://threejs.org/docs/index.html#api/audio/PositionalAudio. That lets you add a sound to a mesh object, and THREE will take care of moving it. If you want to source something other than an AudioBuffer, just connect the audio source to the .gain of the PositionalAudio object.
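For example, a minimal sketch (the file name and refDistance value are illustrative):

// Attach a positional sound to a mesh; THREE keeps the underlying panner's
// position and orientation in sync with the object as it moves.
var listener = new THREE.AudioListener();
camera.add(listener);

var sound = new THREE.PositionalAudio(listener);
new THREE.AudioLoader().load('sounds/engine.ogg', function (buffer) {
    sound.setBuffer(buffer);
    sound.setRefDistance(20);
    sound.setLoop(true);
    sound.play();
});

mesh.add(sound);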

Persistent object in ADF

I am trying to formulate how to create an ADF, drop an object in it, then have that object always be there when I run the app again, after localization occurs, of course. Do I have to save off the locations of virtual objects into a separate file when the user is done "dropping" objects into the scene and then reload them on subsequent runs? Or is there a way to save them into the ADF?
You cannot save objects in an ADF. Instead, while loading the ADF (after it has been recognised), objects can be added at the recognised coordinates.
I did something like this and got it working, but found that placed objects oscillated and were not placed in exactly the same spot on subsequent ADF loads. Whenever a Tango connection is established, that location is treated as the origin (0, 0, 0), and objects are placed relative to this origin, so it is hard to get those objects in exactly the same places.
There's no good way to save them into the ADF, short of hacking some of the ADF's metadata, which is not recommended.
I did what you describe.
You have to write the coordinates of the objects to a separate file; then, when you reload the scene and your room has been recognised (thanks to the ADF), just put the objects back at the same coordinates.
Of course, every coordinate (x, y, z) must refer to the ADF Tango pose, i.e. base = TangoEnums.TangoCoordinateFrameType.TANGO_COORDINATE_FRAME_AREA_DESCRIPTION.
In Unity it's very simple: just set "Use Area Description Poses" to true on your ARCamera Tango script, and do the same on your PointCloud script if you use it.

Three.js - Under what conditions are THREE.Lines frustum-culled?

I'm drawing a handful of lines (THREE.Line). Under some conditions, the line suddenly disappears from the screen. This happens frequently when one endpoint is far outside the camera's field of view, but the other one is definitely within the field of view. This also happens when the line crosses the camera's field of view, but both endpoints are far outside it.
I can temporarily fix it by setting frustumCulled to false for each line, but this isn't optimal since I might have thousands of lines in the scene.
Is this working as expected?
BTW, I'm using r68. I haven't had time to refactor my app to work with r69. I'm using the WebGLRenderer.
I needed to keep lines from disappearing too.
Following Justin's idea of frustumCulled, I had to do
line.frustumCulled = false;
I thought it was
line.geometry.frustumCulled = false;
but I was wrong; you have to apply it to the line, not to its geometry.
This works for me on version 0.70
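To spell out the workaround, a minimal r68/r70-era sketch (the geometry here is illustrative). The renderer's culling test uses the geometry's bounding sphere, so if you move vertices after creating the line, recomputing that sphere may be an alternative to disabling culling entirely:

var geometry = new THREE.Geometry();
geometry.vertices.push(new THREE.Vector3(-10000, 0, 0),
                       new THREE.Vector3( 10000, 0, 0));
var line = new THREE.Line(geometry, new THREE.LineBasicMaterial({ color: 0xffffff }));

line.frustumCulled = false; // on the Object3D, not on its geometry
scene.add(line);

// Alternative if vertices change later: keep the cached bounding sphere fresh.
geometry.verticesNeedUpdate = true;
geometry.computeBoundingSphere();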

Network node highlighting with Crossfilter

I have a graph with a network and a few histograms.
For the network, each node has a few properties with continuous values. The histograms show distributions of those node properties. Is there an easy way to highlight nodes in the network when users brush a histogram? Could I bind a dimension of the network data to the node class attribute "selectednode"?
I have checked dc.js, but it doesn't seem to support network graphs.
Thanks
Crossfilter isn't really built for highlighting, as filtering will remove the data outside the filter from the view of other dimensions and groups. It sounds like you don't want unselected network nodes to disappear, but rather want nodes with property values falling within the selection to be highlighted. I'd build either your histogram or your network directly based on unfiltered data (not based on the Crossfilter) and then whenever the brush event happens, re-render the network nodes, checking the current brush extent against the property values.
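A rough sketch of that wiring, assuming d3 v3 (the dc.js era), an SVG selection of nodes bound to the unfiltered data, and a numeric node property value (all names here are illustrative):

var brush = d3.svg.brush()
    .x(xScale) // the histogram's x scale
    .on("brush", function () {
        var ext = brush.extent(); // [lo, hi] in data units
        node.classed("selectednode", function (d) {
            return !brush.empty() && d.value >= ext[0] && d.value <= ext[1];
        });
    });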
How about two crossfilters built from the same records? The filtering one (cfFilt) would work as expected, with dimensions for everything that can be filtered. The highlighting one (cfHigh) would have one dimension (based on a record id or an identity function, d => d) that is filtered by inclusion in cfFilt.groupAll(), plus dimensions that filter anything that can be highlighted. (cfFilt.groupAll().reduce() will need to return records, not counts; I can say how in comments if anyone needs to know.)
So cfHigh.groupAll() returns the records that pass through all the filtering and all the highlighting.
An interesting (and otherwise hard to achieve) consequence of this approach is that if you highlight something, then a filter makes that thing disappear, and then that filter is removed and the thing comes back, it will stay highlighted as long as nothing came along to change the highlighting filter in the meantime.
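A hedged sketch of the scheme (field names are illustrative, and allFiltered() stands in for the "groupAll() returning records" step):

var cfFilt = crossfilter(records); // drives filtering
var cfHigh = crossfilter(records); // drives highlighting

var filtDim = cfFilt.dimension(function (d) { return d.someProperty; });

// groupAll reduced to the set of ids of records passing all cfFilt filters.
var passing = cfFilt.groupAll().reduce(
    function (p, d) { p.add(d.id); return p; },    // add
    function (p, d) { p.delete(d.id); return p; }, // remove
    function () { return new Set(); }              // init
);

// cfHigh: an id dimension filtered by inclusion in cfFilt's result,
// plus ordinary dimensions for whatever can be highlighted.
var idDim = cfHigh.dimension(function (d) { return d.id; });
var highDim = cfHigh.dimension(function (d) { return d.value; });

function syncHighlightUniverse() {
    var ids = passing.value();
    idDim.filterFunction(function (id) { return ids.has(id); });
}

filtDim.filter([0, 50]);  // a filter changes...
syncHighlightUniverse();  // ...propagate it to the highlighting crossfilter
highDim.filter([10, 20]); // brush-driven highlight range

// Records that pass all the filtering and all the highlighting:
var highlighted = cfHigh.allFiltered();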
