I am using JUNG to make a network diagram. I want to shape the vertices depending on their type. The vertices are pickable and colored. The code for the vertices so far is as follows:
class VertexColors extends PickableVertexPaintTransformer<Number> {

    VertexColors(PickedInfo<Number> pi) {
        super(pi, Color.blue, Color.yellow);
    }

    @Override
    public Paint transform(Number v) {
        if (pi.isPicked(v.intValue())) return picked_paint;
        return v.intValue() % 2 == 1 ? Color.blue : Color.green;
    }
}
I am using the following statement for each vertex:
vv.getRenderContext().setVertexFillPaintTransformer(new VertexColors(vv.getPickedVertexState()));
Now I cannot find a way to shape the vertices while keeping them pickable, or to wrap the vertices around their labels.
All you need is to add another Transformer that provides the vertex shape. The Transformer should choose the shape based on whether the vertex is "picked" or not. To get the picked state, obtain a PickedState object from the visualization. When the selection changes, the transformer is asked for the shape again and the vertices are redrawn with the returned shape. Here is an example of how to do this:
final VisualizationViewer<Integer, String> vv =
        new VisualizationViewer<Integer, String>(layout);

// Transformer that toggles each vertex between two shapes.
Transformer<Integer, Shape> vertexShape = new Transformer<Integer, Shape>() {

    private final Shape[] styles = {
            new Ellipse2D.Double(-25, -10, 50, 20),
            new Arc2D.Double(-15, -15, 30, 30, 30, 150, Arc2D.PIE) };

    @Override
    public Shape transform(Integer i) {
        // Choose a shape according to the "picked" state.
        PickedState<Integer> pickedState = vv.getPickedVertexState();
        int shapeIndex = 0;
        if (pickedState.isPicked(i)) {
            shapeIndex = 1;
        }
        return styles[shapeIndex];
    }
};
vv.getRenderContext().setVertexShapeTransformer(vertexShape);
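This does not cover the second part of the question, wrapping a vertex around its label. One approach (a sketch of my own, not JUNG-specific: `shapeForLabel` and its padding values are hypothetical, and you would look the label up however your graph stores it) is to measure the label text with plain AWT and return a shape just big enough for it, centered on the vertex as JUNG expects:

```java
import java.awt.Font;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.geom.RoundRectangle2D;
import java.awt.image.BufferedImage;

public class LabelShapeSketch {

    private static final Font FONT = new Font(Font.SANS_SERIF, Font.PLAIN, 12);

    // Build a rounded rectangle, centered on the origin, just large enough for the label.
    static Shape shapeForLabel(String label) {
        // A throwaway image gives us a Graphics2D to measure text with, even headless.
        Graphics2D g = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB).createGraphics();
        FontMetrics fm = g.getFontMetrics(FONT);
        int w = fm.stringWidth(label) + 10;  // horizontal padding (arbitrary choice)
        int h = fm.getHeight() + 6;          // vertical padding (arbitrary choice)
        g.dispose();
        // JUNG positions vertex shapes relative to the vertex center, hence -w/2, -h/2.
        return new RoundRectangle2D.Double(-w / 2.0, -h / 2.0, w, h, 8, 8);
    }

    public static void main(String[] args) {
        Shape s = shapeForLabel("vertex 42");
        System.out.println(s.getBounds());
    }
}
```

You would return such a shape from your `Transformer<V, Shape>` instead of a fixed ellipse. If I remember correctly, JUNG 2.x also ships a `VertexLabelAsShapeRenderer` intended for exactly this use case, which may be worth trying first.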
I would like to add a floor to an Entity that I created from a RoomPlan USDZ file. Here's my approach:
Recursively traverse the Entity's children to get all of its vertices.
Find the minimum and maximum X, Y and Z values and use those to create a plane.
Add the plane as a child of the room's Entity.
The resulting plane has the correct size, but not the correct orientation. Here's what it looks like:
The coordinate axes you see show the world origin. I rendered them with this option:
arView.debugOptions = [.showWorldOrigin]
That world origin matches the place and orientation where I started scanning my room.
I have tried many things to align the floor with the room, but nothing has worked. I'm not sure what I'm doing wrong. Here's my recursive function that gets the vertices (I'm pretty sure this function is correct since the floor has the correct size):
func getVerticesOfRoom(entity: Entity, _ transformChain: simd_float4x4) {
    guard let modelEntity = entity as? ModelEntity else {
        // If the Entity isn't a ModelEntity, skip it and recurse into its children
        let updatedTransformChain = entity.transform.matrix * transformChain
        for currEntity in entity.children {
            getVerticesOfRoom(entity: currEntity, updatedTransformChain)
        }
        return
    }

    // Below we get the vertices of the ModelEntity
    let updatedTransformChain = modelEntity.transform.matrix * transformChain

    // Iterate over all instances
    var instancesIterator = modelEntity.model?.mesh.contents.instances.makeIterator()
    while let currInstance = instancesIterator?.next() {
        // Get the model of the current instance
        let currModel = modelEntity.model?.mesh.contents.models[currInstance.model]
        // Iterate over the parts of the model
        var partsIterator = currModel?.parts.makeIterator()
        while let currPart = partsIterator?.next() {
            // Iterate over the positions of the part
            var positionsIterator = currPart.positions.makeIterator()
            while let currPosition = positionsIterator.next() {
                // Transform the position and store it
                let transformedPosition = updatedTransformChain * SIMD4<Float>(currPosition.x, currPosition.y, currPosition.z, 1.0)
                modelVertices.append(SIMD3<Float>(transformedPosition.x, transformedPosition.y, transformedPosition.z))
            }
        }
    }

    // Recurse into the children of the ModelEntity
    for currEntity in modelEntity.children {
        getVerticesOfRoom(entity: currEntity, updatedTransformChain)
    }
}
And here's how I call it and create the floor:
// Get the vertices of the room
getVerticesOfRoom(entity: roomEntity, roomEntity.transform.matrix)
// Get the min and max X, Y and Z positions of the room
var minVertex = SIMD3<Float>(Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude)
var maxVertex = SIMD3<Float>(-Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude)
for vertex in modelVertices {
    if vertex.x < minVertex.x { minVertex.x = vertex.x }
    if vertex.y < minVertex.y { minVertex.y = vertex.y }
    if vertex.z < minVertex.z { minVertex.z = vertex.z }
    if vertex.x > maxVertex.x { maxVertex.x = vertex.x }
    if vertex.y > maxVertex.y { maxVertex.y = vertex.y }
    if vertex.z > maxVertex.z { maxVertex.z = vertex.z }
}
// Compose the corners of the floor
let upperLeftCorner: SIMD3<Float> = SIMD3<Float>(minVertex.x, minVertex.y, minVertex.z)
let lowerLeftCorner: SIMD3<Float> = SIMD3<Float>(minVertex.x, minVertex.y, maxVertex.z)
let lowerRightCorner: SIMD3<Float> = SIMD3<Float>(maxVertex.x, minVertex.y, maxVertex.z)
let upperRightCorner: SIMD3<Float> = SIMD3<Float>(maxVertex.x, minVertex.y, minVertex.z)
// Create the floor's ModelEntity
let floorPositions: [SIMD3<Float>] = [upperLeftCorner, lowerLeftCorner, lowerRightCorner, upperRightCorner]
var floorMeshDescriptor = MeshDescriptor(name: "floor")
floorMeshDescriptor.positions = MeshBuffers.Positions(floorPositions)
// Positions should be specified in counter-clockwise order
floorMeshDescriptor.primitives = .triangles([0, 1, 2, 2, 3, 0])
let simpleMaterial = SimpleMaterial(color: .gray, isMetallic: false)
guard let floorMesh = try? MeshResource.generate(from: [floorMeshDescriptor]) else {
    return
}
let floorModelEntity = ModelEntity(mesh: floorMesh, materials: [simpleMaterial])
// Add the floor as a child of the room
roomEntity.addChild(floorModelEntity)
Can you think of a transformation that I could apply to the vertices or the plane to align them?
Thanks for any help.
I'm working on a game in LibGDX. Right now, I am working on drawing a line from a moving entity's body's current position in the direction it is moving. Maybe I didn't word that correctly, so here's my very artistic representation of what I'm talking about.
The problem that I'm having is that vertical lines are always much longer than diagonal lines, and diagonal lines are always much longer than horizontal lines. What I'm wanting is for the line being projected from the entity to always be the same length regardless of the direction.
Below is the code used for drawing lines from the center of an entity's body. As you can see, I am scaling the line (e.g., by 25.0f). Maybe there's a formula that I could use to dynamically change this scalar depending on the direction?
public class BodyMovementProjection implements Updatable {

    public final Body body;
    public final ShapeRenderer shapeRenderer;
    public boolean debugProjection = false;
    public float scalar = 25.0f;

    private final Vector2 posThisFrame = new Vector2();
    private final Vector2 posLastFrame = new Vector2();
    private final Vector2 projection = new Vector2();
    private float[] debugColorVals = new float[4];

    public BodyMovementProjection(Body body) {
        this.body = body;
        this.shapeRenderer = body.entity.gameScreen.shapeRenderer;
    }

    @Override
    public void update() {
        body.aabb.getCenter(posThisFrame);
        posLastFrame.set(posThisFrame).sub(body.bodyMovementTracker.getSpritePosDelta());
        projection.set(posThisFrame).sub(posLastFrame).scl(scalar).add(posLastFrame);
        if (debugProjection) {
            shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
            shapeRenderer.setColor(debugColorVals[0], debugColorVals[1], debugColorVals[2], debugColorVals[3]);
            shapeRenderer.line(posLastFrame, projection);
            shapeRenderer.end();
        }
    }

    public void setDebugColorVals(float r, float g, float b, float a) {
        debugColorVals[0] = r;
        debugColorVals[1] = g;
        debugColorVals[2] = b;
        debugColorVals[3] = a;
    }
}
Normalize before you scale.
By normalizing, you make the direction vector 1 unit in length; scaling it by 25 afterwards then gives a vector that is 25 units long every time, regardless of how far apart the current and previous positions are.
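As a standalone sketch of the fix (plain Java, no LibGDX; in LibGDX itself, `Vector2.nor()` does the normalization step):

```java
public class ProjectionSketch {

    // Return the point exactly `scalar` units away from `from`, in the direction of `to`.
    static float[] project(float fromX, float fromY, float toX, float toY, float scalar) {
        float dx = toX - fromX;
        float dy = toY - fromY;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        if (len == 0f) {
            return new float[] { fromX, fromY }; // no movement, nothing to project
        }
        // Normalize (divide by length), then scale: the result is always `scalar` units long.
        return new float[] { fromX + dx / len * scalar, fromY + dy / len * scalar };
    }

    public static void main(String[] args) {
        // Diagonal movement of length sqrt(2) still projects exactly 25 units.
        float[] p = project(0f, 0f, 1f, 1f, 25f);
        System.out.println(Math.hypot(p[0], p[1])); // ~25.0
    }
}
```

Applied to the `update()` method above, the projection line would become `projection.set(posThisFrame).sub(posLastFrame).nor().scl(scalar).add(posLastFrame);`.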
I have a custom map and a map renderer. Inside the renderer, in the Android project, I have this method:
public void AddMapPolygon(double[][] polygon, MapResource mapResource)
{
    PolygonOptions options = new PolygonOptions();
    var points = new LatLng[polygon.Length];
    int index = 0;
    foreach (double[] loc in polygon)
    {
        points[index] = new LatLng(loc[0], loc[1]);
        index++;
    }
    options.Add(points);
    options.InvokeFillColor(Color.Argb(128, 255, 0, 0));
    options.InvokeStrokeColor(Color.Argb(200, 0, 0, 0));
    options.InvokeStrokeWidth(4f);
    NativeMap.AddPolygon(options);
}
The method is called, but the polygon is not visible on the map.
What did I do wrong?
The reason the polygon didn't appear was that I didn't wrap the call in Device.BeginInvokeOnMainThread(). UI-related changes should be invoked on the main thread via Device.BeginInvokeOnMainThread().
I have a node in JavaFX that moves along a path in a timeline.
Now I also want to rotate the node.
I tried it, but every time the node no longer holds the path and "flies away". Is there a way to relocate and rotate (I tried it with RotateTransition)?
Edit:
This is my view class:
public class MyView extends ImageView {

    public MyView(Image image) {
        super(image);
        RotateTransition transition = new RotateTransition();
        transition.setCycleCount(Animation.INDEFINITE);
        transition.setNode(this);
        transition.setDuration(Duration.millis(10));
        transition.setFromAngle(5);
        transition.setToAngle(5);
        transition.setAutoReverse(true);
        transition.play();
    }
}
In another class I have this:
private void startMoveAnimation(MyView[] views) {
    x++;
    y++;
    this.timeline = new Timeline();
    timeline.setCycleCount(Animation.INDEFINITE);
    moveEvent = new EventHandler<ActionEvent>() {
        @Override
        public void handle(ActionEvent event) {
            for (MyView view : views) {
                view.relocate(x, y);
            }
        }
    };
    KeyFrame moveKeyFrame = new KeyFrame(Duration.millis(SPEED), moveEvent);
    timeline.getKeyFrames().add(moveKeyFrame);
    timeline.play();
}
x and y are double values.
Using transforms gives you better control of the order of the transformations. Furthermore, some transforms allow you to specify a pivot point, which is not possible with e.g. the Node.rotate property. Transforms in a list are applied "right to left" (the transform with the highest index is applied first).
The following example shows how to move a rectangle while rotating it around its own center (the cycle resets to the original position instead of continuously moving in the same direction, but the properties of the transforms can be animated independently in arbitrary ways):
@Override
public void start(Stage primaryStage) {
    Rectangle view = new Rectangle(100, 100);
    // pivot point = center of rect
    Rotate rotate = new Rotate(0, 50, 50);
    Translate translate = new Translate();
    // rotate first, then move
    view.getTransforms().addAll(translate, rotate);
    Timeline timeline = new Timeline(
            new KeyFrame(Duration.ZERO,
                    new KeyValue(translate.xProperty(), 0d),
                    new KeyValue(translate.yProperty(), 0d),
                    new KeyValue(rotate.angleProperty(), 0d)),
            new KeyFrame(Duration.seconds(2),
                    new KeyValue(translate.xProperty(), 300d),
                    new KeyValue(translate.yProperty(), 500d),
                    new KeyValue(rotate.angleProperty(), 360d)));
    timeline.setCycleCount(Animation.INDEFINITE);
    timeline.play();
    Pane root = new Pane(view);
    Scene scene = new Scene(root, 500, 500);
    primaryStage.setScene(scene);
    primaryStage.show();
}
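The "right to left" ordering is not specific to JavaFX; AWT's AffineTransform composes the same way, which makes the effect easy to check in isolation (a small sketch, independent of the code above):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class TransformOrderDemo {
    public static void main(String[] args) {
        // Concatenate translate(100, 0), then rotate(90 degrees):
        // the last transform concatenated is applied to the point first.
        AffineTransform t = new AffineTransform();
        t.translate(100, 0);
        t.rotate(Math.PI / 2);
        Point2D p = t.transform(new Point2D.Double(10, 0), null);
        // (10, 0) is first rotated to (0, 10), then translated to (100, 10).
        System.out.printf("%.1f, %.1f%n", p.getX(), p.getY()); // prints "100.0, 10.0"
    }
}
```

This is why the answer adds `translate` before `rotate` in the transforms list: the rectangle rotates around its own center first and the rotated result is then moved.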
That's what I would try first:
Put the node into a Group. Add the Group to the parent instead of the node. Do the translate animation on the Group and do the rotate animation on the node itself. This is just a first guess which I cannot try out because you haven't provided a minimal reproducible example.
I have a hitResult relative to a detected plane:
arFragment.setOnTapArPlaneListener(
(HitResult hitResult, Plane plane, MotionEvent motionEvent) -> {...})
I want to anchor a model to the plane and have it always face up (the ceiling):
Anchor anchor = plane.createAnchor(plane.getCenterPose());
AnchorNode anchorNode = new AnchorNode(anchor);
anchorNode.setRenderable(model);
The problem is that the model is sometimes randomly rotated. Sometimes it does not point to the ceiling: it is rotated 180 degrees, 90 degrees, or by some random amount.
(At least, all this in the emulator).
You can use something like
boolean isVerticalPlane = plane.getType() == Plane.Type.VERTICAL;
inside your setOnTapArPlaneListener from your ArFragment.
Final code will be:
ArFragment arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment);
arFragment.setOnTapArPlaneListener(new BaseArFragment.OnTapArPlaneListener() {
    @Override
    public void onTapPlane(HitResult hitResult, Plane plane, MotionEvent motionEvent) {
        boolean isVerticalPlane = plane.getType() == Plane.Type.VERTICAL;
    }
});