SKSpriteNodes are stacking on top of each other

I am writing a soccer app in which I use an SKScene to display my players and so on. My player nodes are set up in an .sks file, but I want to add more nodes to the scene when I press a button. The code below shows how I add these nodes to the scene, and they are displayed correctly.
I am able to drag all the nodes around, but when the nodes added in the function below collide with any other node, they stick together, as in the image below. The zPosition of the player nodes is 3. I have also set ignoresSiblingOrder on the scene. I have tried implementing physics bodies, but that doesn't solve the problem; I have tried changing the zPosition; and I have tried adding the nodes under different parents. The current parent of all the nodes is the SKScene. The nodes added in the .sks file do not stack on top of each other; this only affects the nodes I have added programmatically. Any ideas on how to stop them from sticking or stacking on top of each other?
func addDiagonal(image: UIImage, name: String) {
    arrowNode = SKSpriteNode()
    arrowNode!.texture = SKTexture(image: image)
    arrowNode!.size = CGSize(width: 65, height: 65)
    arrowNode!.position = CGPoint(x: 153, y: 267)
    arrowNode!.zPosition = 4
    arrowNode!.name = name
    myTrainigArr?.append(arrowNode!)
    self.addChild(arrowNode!)
}
This is what happens when the nodes collide or touch each other.

Related

Display Mesh On Top Of Another | Remove Overlapping | Render Order | Three.js

I have two OBJ meshes. They have some common areas, but are not identical. I displayed them both by adding them to the scene, just like one mesh on top of another. The problem is that the lower mesh overlaps the top mesh, but what I want to achieve is for the lower mesh to always stay below, without overlapping, giving space to the entire top mesh.
I went through this fiddle (Fiddle with renderOrder) and tried something like this:
var objLoader1 = new OBJLoader2();
objLoader1.load('assets/object1.obj', (root) => {
    root.renderOrder = 0;
    scene.add(root);
});
var objLoader2 = new OBJLoader2();
objLoader2.load('assets/object2.obj', (root) => {
    root.renderOrder = 1;
    scene.add(root);
});
But for some reason the overlap is still there. I also tried:
var objLoader1 = new OBJLoader2();
objLoader1.load('assets/object1.obj', (root) => {
    objLoader1.renderOrder = 0;
    scene.add(root);
});
var objLoader2 = new OBJLoader2();
objLoader2.load('assets/object2.obj', (root) => {
    objLoader2.renderOrder = 1;
    scene.add(root);
});
Then I tried going through another fiddle (Another Fiddle), but when I run it I get only the lower or the upper mesh, while I want to see both without any overlap:
var layer1 = new Layer(camera);
composer.addPass(layer1.renderPass);
layer1.scene.add(new THREE.AmbientLight(0xFFFFFF));
var objLoader1 = new OBJLoader2();
objLoader1.load('assets/object1.obj', (root) => {
    layer1.scene.add(root);
});

var layer2 = new Layer(camera);
composer.addPass(layer2.renderPass);
layer2.scene.add(new THREE.AmbientLight(0xFFFFFF));
var objLoader2 = new OBJLoader2();
objLoader2.load('assets/object2.obj', (root) => {
    layer2.scene.add(root);
});
I also set the material's depthTest to false, but nothing helped. Can anyone help me achieve what I want? If you can't tell what I mean by overlapping, see the image below. Thanks to anyone who takes the time and effort to go through this and help.
You can use polygonOffset to achieve your goal. It modifies the depth value right before a fragment is written, which helps move polygons off of each other without visually changing their position:
material.polygonOffset = true;
material.polygonOffsetUnits = 1;
material.polygonOffsetFactor = 1;
Here is a fiddle demonstrating the solution:
https://jsfiddle.net/5s8ey0ad/1/
Here is what the OpenGL Docs have to say about polygon offset:
When GL_POLYGON_OFFSET_FILL, GL_POLYGON_OFFSET_LINE, or GL_POLYGON_OFFSET_POINT is enabled, each fragment's depth value will be offset after it is interpolated from the depth values of the appropriate vertices. The value of the offset is factor×DZ+r×units, where DZ is a measurement of the change in depth relative to the screen area of the polygon, and r is the smallest value that is guaranteed to produce a resolvable offset for a given implementation. The offset is added before the depth test is performed and before the value is written into the depth buffer.
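To make that concrete, here is a minimal sketch of the idea, assuming two deliberately coplanar planes and the usual three.js scene/camera/renderer boilerplate (all names here are illustrative):

const geometry = new THREE.PlaneGeometry(2, 2);

// The white plane is drawn normally.
const white = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xffffff }));

// The red plane sits at exactly the same depth, but its material
// pushes its depth values back slightly, so the white plane wins
// the depth test instead of z-fighting.
const red = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({
    color: 0xff0000,
    polygonOffset: true,
    polygonOffsetFactor: 1, // scaled by the polygon's depth slope
    polygonOffsetUnits: 1   // plus the smallest resolvable offset
}));

scene.add(white);
scene.add(red);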
You're experiencing z-fighting, which is when two or more surfaces occupy the same space in the depth buffer, so the renderer doesn't know which one to render on top of the other. Render order alone doesn't fix this, because both surfaces still sit on the same plane regardless of which one gets drawn first. You have a few options to resolve this:
1. Move one of the beams ever so slightly up on the y-axis. A tiny fraction gives one priority over the other, and the distance may not be noticeable to the eye.
2. I saw your fiddle, and you forgot to add depthTest: false to your material. However, this will cause issues when depth-testing the rest of the shape, since some white is on top of the red, but also some red is on top of the white. The approach in the fiddle works only for a simple plane, not for more complex geometries.
3. Use a boolean operation that removes one shape from the other, like CSG.
I think you'd save yourself a lot of headache by using approach #1, sketched below.
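A rough sketch of approach #1, reusing the loader callback from the question (the 0.001 offset is an arbitrary small value you may need to tune to your scene's scale):

var objLoader2 = new OBJLoader2();
objLoader2.load('assets/object2.obj', (root) => {
    // Nudge the top mesh up a tiny amount so the two surfaces
    // no longer occupy exactly the same depth.
    root.position.y += 0.001;
    scene.add(root);
});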

How to change the source anchor starting position in JsPlumb?

I'm developing an app using jsPlumb. Currently, if I create a shape, it puts the shape in the top-left corner of the canvas, and if I connect from a source to a target point, it works.
Now, I want to create the shape at the position where I right-clicked and connect from source to target. I'm able to create the shape at the position where I right-clicked.
But if I drag from source to target, the source anchor does not start from the source. Instead, it starts from the top-left corner. In the screenshot below, I want to drag from the 'Project' shape to the 'No status' shape.
One more thing: if I drag the 'Project' shape, the endpoint starts from the source correctly.
I have figured out the problem. Earlier, the if block (1) in the code below was placed after the makeSource call (2). That's why, even though the shape moved to the new location, the starting location of the anchor didn't move. As soon as I moved the if block (1) above (2), jsPlumb picked up the shape's location as well as the anchor's starting-point location.
So, if you are moving the shape, you should do it before you register the shape with jsPlumb.
if (!!shape.location) { // ----------------------------------> (1)
    domEl.style.left = shape.location.left + 'px';
    domEl.style.top = shape.location.top + 'px';
}
this.jsPlumbInstance.draggable(domEl, {
    // to contain the shape within the canvas
    // containment: true
});
this.jsPlumbInstance.makeSource(domEl, { // ------------------ (2)
    filter: '.cp',
    anchor: ['AutoDefault'],
    connectorStyle: {
        stroke: '#181919',
        strokeWidth: 2,
        outlineStroke: 'transparent',
        outlineWidth: 4
    }
});

How to set vertex colors of THREE.Points object in three.js

I am trying to write a function that creates a point cloud from a mesh. I also want to control the color of every vertex of that point cloud. So far I have tried assigning colors to the geometry, but the colors are not being updated.
InteractablePointCloud_simug = function(object, editor) {
    var signals = editor.signals;
    var vertexSize = 0.3;
    var pointMat = new THREE.PointsMaterial({ size: vertexSize, vertexColors: THREE.VertexColors });

    var colors = [];
    var colorStep = 0.1;
    for (var i = 0; i < object.geometry.vertices.length; i++) {
        colors.push(new THREE.Color(colorStep * i, colorStep * i, colorStep * i));
    }

    // get points from the mesh of the original object
    var points = new THREE.Points(object.geometry, pointMat);

    // update colors
    points.geometry.colors = colors;
    points.geometry.colorsNeedUpdate = true;

    updatePosition();

    // add points object to the scene
    editor.addNoneObjectMesh(points);
}
I think this is probably doable on other video cards, but mine does not seem to like it. Theoretically, if your material color is white, it should be multiplied by the vertex color (which is basically like using the vertex color alone); but since you did not specify black as your color, that is not the problem.
If the code is not working on your computer (it isn't on mine either), you will have to go nuclear and just create a new selectedPointsGeometry and a new selectedPointsMesh.
Grab a couple of vertices from the original, copy them, put them in a vertices array, and run an update method. (You have to recreate the geometry and mesh every time, at least on my PC; I tried calling every single update method and had to resort to recreating.)
Mind the CoffeeScript; @anchor is the container.
updateSelectedVertices: (vertices) ->
    if @selectedParticles
        @anchor.remove @selectedParticles
    @pointGeometry = new THREE.Geometry()
    @pointGeometry.vertices = vertices
    @selectedParticles = new THREE.PointCloud(
        @pointGeometry,
        @selectedPointMaterial
    )
    @pointGeometry.verticesNeedUpdate = true
    @pointGeometry.computeLineDistances()
    @selectedParticles.scale.copy @particles.scale
    @selectedParticles.sortParticles = true
    @anchor.add @selectedParticles
selectedPointMaterial is defined elsewhere. Just use a different color (and a different size) than your non-selected point cloud, e.g. black and size 5 for the non-selected point cloud, and yellow and size 10 for the selected one. My other mesh is called @particles (this is the non-selected point cloud), and I just have to copy its scale.
Now my selected points show as yellow.
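For what it's worth, here is a sketch of the same idea against the BufferGeometry API that replaced THREE.Geometry in later three.js releases (assuming a reasonably recent version; the triangle data here is illustrative):

// Per-vertex colors on THREE.Points with BufferGeometry.
const positions = new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0]);
const colors = new Float32Array([1, 0, 0, 0, 1, 0, 0, 0, 1]); // one RGB triple per vertex

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));

const material = new THREE.PointsMaterial({ size: 0.3, vertexColors: true });
scene.add(new THREE.Points(geometry, material));

// To recolor a vertex later, write into the attribute and flag it:
geometry.attributes.color.setXYZ(0, 1, 1, 0); // vertex 0 becomes yellow
geometry.attributes.color.needsUpdate = true;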

SpriteKit - Selecting Node After Rotation

An SKSpriteNode named “leaf” is added as a child of another SKSpriteNode named “allObjects”.
allObjects is set to be the same width and height as the SKView.
I drag the leaf to a location on allObjects, click a distinctive part of its tip, and, using println, get the following in the console:
touchBegan, touch.locationInNode(allObjects): (621.5, 156.75)
touchesEnded, touch.locationInNode: (621.5, 156.75)
touchesEnded, Leaf location: (695.375, 83.25)
touchesEnded, nodeAtPoint(location).name: Optional("leaf")
So far, so good. I can drag the leaf as much as I like at this point with no problem. The important part to note is that the nodeAtPoint is, as expected, ‘leaf’.
However, if I then rotate allObjects, like this:
var rotate = SKAction()
rotate = SKAction.rotateByAngle(0.4, duration: 0)
allObjects.runAction(rotate)
… and then click in the same location on the leaf (visually, in the iOS Simulator), I get the following in the console. I'm confused: having rotated allObjects and clicked in the same location (and gotten the same coordinates), why am I no longer selecting the leaf but missing it by a wide margin (nodeAtPoint shows I am hitting the background)?
touchBegan, touch.locationInNode(allObjects): (620.813842773438, 156.470306396484)
touchesEnded, touch.locationInNode: (620.813842773438, 156.470306396484)
touchesEnded, Leaf location: (695.375, 83.25)
touchesEnded, nodeAtPoint(location).name: Optional("allObjects")
Can anyone help?
The node and the coordinates used with locationInNode and nodeAtPoint need to be consistent. In this case, the point returned by locationInNode is in allObjects' coordinate space, while the nodeAtPoint call (i.e., self.nodeAtPoint) expects a point in scene coordinates. To resolve this, you can either replace
nodeAtPoint(location)
with
allObjects.nodeAtPoint(location)
or replace
let location = touch.locationInNode(allObjects)
with this
let location = touch.locationInNode(self)

Multiple views/renders of the same kineticjs model

I am building a graph utility that displays a rather large graph containing a lot of data.
One of the things I would like to support is having multiple views of the data simultaneously in different panels of my application.
I've drawn a picture to try and demonstrate what I mean. Suppose I've built the gradiented image in the background using Kinetic.
I'd like to be able to show the part outlined in red and the part outlined in green simultaneously, without having to rebuild the entire image.
var stage1 = new Kinetic.Stage({
    container: 'container1',
    width: somewidth,
    height: someheight
});
var stage2 = new Kinetic.Stage({
    container: 'container2',
    width: someotherwidth,
    height: someotherheight
});
var mapLayer = new Kinetic.Layer({
    y: someY,
    scale: someScale
});
// add stuff to first layer here...
var topLayer = new Kinetic.Layer({
    y: otherY,
    scale: otherScale
});
// add other stuff to second layer here...
stage1.add(mapLayer);
stage1.add(topLayer);
stage2.add(mapLayer);
stage2.add(topLayer);
At the point at which I've added my layers to stage1, everything is fine, but as soon as I try to add them to stage2 as well, it breaks down. I'm sifting through the source, but I can't see anything forcing data to be unique to a stage. Is this possible? Or do I have to duplicate all of my shapes?
Adding a node to multiple parents is not possible by KineticJS design. Each Layer has its own <canvas> element, and as far as I know it is not possible to insert the same DOM element into the document twice.
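If duplicating is acceptable, one possible workaround (a sketch, not tested against every KineticJS version) is to give each stage its own deep copy of the layers via clone():

stage1.add(mapLayer);
stage1.add(topLayer);

// clone() deep-copies a node and its children, so each stage
// draws into its own <canvas> elements.
stage2.add(mapLayer.clone());
stage2.add(topLayer.clone());

The trade-off is that the two stages no longer share state: a change to one copy has to be re-applied (or the layer re-cloned) on the other.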
