Stretching towards a vector - three.js

I am stumped.
I've created a line geometry in the shape of a spring, and I'd like to be able to transform this spring such that one end remains fixed in place while the other follows a Vector3 position. The Object3D.lookAt() method gets it pointing the right way, but I can't figure out a way to stretch it to that point.
I figure it'd be a Matrix4.makeTransform(), but I don't know how to calculate the amounts to scale in each direction. I thought it'd simply be a matter of dividing the x, y, and z coordinates of the endpoint of my spring and the target point, but that hasn't worked at all.
Is there a method to do this that I've somehow missed?

Your .lookAt() call makes the spring point its local z-axis at the target (hopefully that's the axis running along the length of the spring, if you've got your mesh set up correctly!).
Once you have it pointed at the target, you need to figure out how far away the target is, using something like:
var distance = new THREE.Vector3().copy(targetPoint).sub(springObject.position).length();
Once you know that distance, and assuming you know the default length of the spring model, it should just be a matter of setting the z-axis scale of the spring to distance/defaultLength, something like:
springObject.scale.z = distance / defaultLength;
This gets a little more complicated if you are already scaling the spring model by some amount; you'll have to take that into account as well.
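Putting that together, a minimal sketch (assuming the spring geometry runs along its local z-axis from the object's origin, and that defaultLength is the unscaled length of the model):

// Aim the spring at a target and stretch it to reach that point.
// Assumes the geometry extends along local +z from the object's position.
function stretchSpring(springObject, targetPoint, defaultLength) {
  springObject.lookAt(targetPoint); // point local z at the target
  var distance = new THREE.Vector3()
    .copy(targetPoint)
    .sub(springObject.position)
    .length();
  springObject.scale.z = distance / defaultLength; // stretch to span the gap
}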

Libgdx scene2d multiple rotations around different origins

I have an actor that I need to rotate twice, each time around a different origin. But it seems the actor just saves the origin set with setOrigin() and the rotation set with setRotation() and applies them when drawing. So if I simply set these values twice, the second set just overwrites the first. Is there any way to chain multiple rotations around different origins?
Yes, you are right: you will not see the result until you draw your actor, because setting the rotation doesn't transform the actor's coordinates or anything else. Rotation is just a plain value that is used only when the actor draws its graphics or when someone queries the actor's bounding box. So the rotation transformation happens each time someone needs it.
Back to your question: if you want to apply several transformations to your actor, you should accumulate them somehow and then change the actor's state only once.
As a solution, have a look at the Group#applyTransform() method, which is supplied with a Matrix4 in which you can flexibly configure all your transformations; a sketch of the composition follows below. You will have to place your actor inside a Group object, which is something of a downside, but in return you get matrix transformations that are not available for a plain Actor.
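For illustration, here is a minimal, language-agnostic sketch of composing two rotations around different origins into one transform (shown in 2D JavaScript to match the other snippets on this page; in libgdx you would build the same product with a Matrix4 and hand it to the Group):

// 2D affine matrix as [a, b, c, d, tx, ty]: x' = a*x + c*y + tx, y' = b*x + d*y + ty
function multiply(m, n) { // m * n: apply n first, then m
  return [
    m[0] * n[0] + m[2] * n[1],        m[1] * n[0] + m[3] * n[1],
    m[0] * n[2] + m[2] * n[3],        m[1] * n[2] + m[3] * n[3],
    m[0] * n[4] + m[2] * n[5] + m[4], m[1] * n[4] + m[3] * n[5] + m[5]
  ];
}

function rotationAbout(ox, oy, radians) {
  var c = Math.cos(radians), s = Math.sin(radians);
  // T(origin) * R(angle) * T(-origin), folded into a single matrix
  return [c, s, -s, c,
          ox - c * ox + s * oy,
          oy - s * ox - c * oy];
}

// Two rotations around two different origins, accumulated into one transform:
var combined = multiply(rotationAbout(50, 50, Math.PI / 4),   // applied second
                        rotationAbout(10, 10, Math.PI / 2));  // applied first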
Hope this helps; good luck.

drag drawLine jcanvas optimisation coordinates

Considering this basic case, one might expect the coordinates of the layer to be updated... but they are not.
Instead, there is the possibility of remembering the starting point, computing the mouse offset, and then updating the coordinates, like in this test, but... the effect is quite extreme.
Expected: point (x1, y1) is static.
Result: point (x1, y1) moves incredibly fast.
Setting the coordinates to constants leaves the drag behaviour unchanged.
The main problem here is that drag action applies to the whole layer.
Fix: apply the modification at the end of the drag, like in this snippet.
But it is relatively ugly. Does anyone have a better way to:
get the actual coordinates of the points of the line while dragging;
keep one point of the line static while the others are moving?
Looking forward to your suggestions.
In order to maintain the efficiency of dragging layers, jCanvas only offsets the x and y properties for any draggable layer (including paths). Therefore, when dragging, you can compute the absolute positions of any set of path coordinates using something along these lines:
var absX1 = layer.x + layer.x1;
var absY1 = layer.y + layer.y1;
(assuming layer references a jCanvas layer, of course)
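For what it's worth, here is a hedged sketch combining both ideas with the standard jCanvas drag callbacks (layer name and coordinates are made up for the example): read absolute positions during drag, then fold the accumulated offset back into the path coordinates on dragstop, so the change effectively commits at the end of the drag.

$("canvas").drawLine({
  layer: true, name: "line", draggable: true,
  strokeStyle: "#000", strokeWidth: 2,
  x1: 50, y1: 50, x2: 150, y2: 100,
  drag: function (layer) {
    // Absolute positions while dragging, as described above
    var absX1 = layer.x + layer.x1;
    var absY1 = layer.y + layer.y1;
  },
  dragstop: function (layer) {
    // Commit the offset so x1/y1/x2/y2 become absolute again
    $(this).setLayer(layer, {
      x1: layer.x + layer.x1, y1: layer.y + layer.y1,
      x2: layer.x + layer.x2, y2: layer.y + layer.y2,
      x: 0, y: 0
    });
  }
});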

Texture2D.Bounds.Intersect, but the Bounds never move? - XNA, .Net 4.0

I am still shiny new to XNA, so please forgive any stupid questions and statements in this post. (The added issue is that I am using Visual Studio 2010 with .NET 4.0, which also means very few examples exist out on the web; well, none that I could find easily.)
I have two 2D objects in a "game" that I am using to learn more about XNA. I need to figure out when these two objects intersect.
I noticed that the Texture2D object has a property named "Bounds", which in turn has a method named "Intersects" that takes a Rectangle (the other Texture2D.Bounds) as an argument.
However, when you run the code, the objects always intersect, even if they are on separate sides of the screen. When I step into the code and mouse over the Bounds, I see four parameters, and the X and Y coordinates always read "X = 0, Y = 0" for both objects (hence they always intersect).
The thing that confuses me is the fact that the Bounds property is on the Texture rather than on the Position (or Vector2) of the objects. I eventually created a little helper method that takes in the objects and their positions and then calculates whether they intersect, but I'm sure there must be a better way.
Any suggestions or pointers would be much appreciated.
Gineer
The Bounds property was added to the Texture2D class to simplify working with Viewports. More here.
You shouldn't think of the texture as being the object itself; it's merely what holds the data that gets drawn to the screen, whether it's used for a Sprite or a RenderTarget. How the position of objects or sprites is stored and moved is entirely up to you, so you have to track and handle this yourself. That includes the position of any bounds.
The 2D Rectangle Collision tutorial is a good start, as you've already found :)
I found the XNA Creator Club tutorials via another post on Stack Overflow by Ben S. The Collision Series 1: 2D Rectangle Collision tutorial explains it all.
It seems you have to create new rectangles, based on the positions of the objects as they move around in the game, every time you run the intersection method, so that the rectangles contain the updated X and Y coordinates.
I am still not quite sure why the original objects' rectangle positions cannot just be kept up to date, but if this is the way it should work, that's good enough for me... for now. ;-)
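In other words, the per-frame bounds are derived from the current position plus the texture size. A rough sketch of that logic (in JavaScript to match the other snippets on this page; in XNA you would build a new Rectangle from the position and texture dimensions and call its Intersects method):

// Rebuild the bounding box from the *current* position each frame;
// Texture2D.Bounds alone always sits at (0, 0).
function boundsAt(position, texture) {
  return { x: position.x, y: position.y, w: texture.width, h: texture.height };
}

// Axis-aligned rectangle overlap test (the same check Rectangle.Intersects performs)
function intersects(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// Per frame: if (intersects(boundsAt(posA, texA), boundsAt(posB, texB))) { ... }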

Raytracing (LoS) on 3D hex-like tile maps

Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned into a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here is an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile by ((y+z) mod 2) * tileSize/2.0 to the right from the position it would have in a "sane" cartesian system. For the non-jagged rows, that yields 0; for rows where (y+z) mod 2 is 1, it yields half a tile.
I'm working in C#4 targeting the .NET Framework 4.0, but I don't really need specific code, just the algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to solve it, to no avail, and trying to draw the whole thing on paper to "visualize" it didn't help either. :(
Thanks in advance for any answers.
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to try to work this entirely in cell index space is doomed to failure (though I'll admit it's just what I think and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little tricky. First, decide whether point(0,0) is the bottom left corner of cell(0,0) or the centre, or some other point. Since it makes the explanations easier, I'll go with bottom-left corner. Observe that any point(x,floor(y)==0) maps to cell(floor(x),0). Indeed, any point(x,even(floor(y))) maps to cell(floor(x),floor(y)).
Here, I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x, odd(floor(y))) maps to cell(floor(x - 0.5), floor(y)).
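For concreteness, a minimal sketch of that forward mapping (2D, bottom-left-corner convention, assuming non-negative map coordinates; shown in JavaScript to match the other snippets on this page, and the 3D version would test (y+z) mod 2 as described in the question):

// Map a cartesian point to the (col, row) indices of the cell containing it.
function pointToCell(x, y) {
  var row = Math.floor(y);
  var shifted = row % 2 === 1;                  // odd rows are offset right by 0.5
  var col = Math.floor(shifted ? x - 0.5 : x);
  return { col: col, row: row };
}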
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
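Under the same conventions, the reverse mapping is a one-liner (returning the bottom-left corner of the cell):

function cellToPoint(col, row) {
  var shifted = row % 2 === 1;
  return { x: col + (shifted ? 0.5 : 0), y: row };
}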
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0), and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (alternating) along the first axis by half the block size. If you split each block in half along this axis, the above example becomes (with the shifts) a simple 9x4x4 cartesian coordinate system of regularly stacked blocks. Doing the raytracing in that grid is much simpler and less error prone.
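A hedged sketch of that reindexing (JavaScript again; the function names are made up): each original tile owns two half-cells along x, shifted by one half-cell on the offset rows, so you can trace the line in the regular half-cell grid and map the hits back to tiles.

// Which two half-cells (along x) does tile (x, y, z) occupy?
function tileToHalfCells(x, y, z) {
  var shift = (y + z) % 2;   // offset rows start one half-cell later
  return [{ hx: 2 * x + shift, y: y, z: z },
          { hx: 2 * x + shift + 1, y: y, z: z }];
}

// Map a half-cell hit back to the original tile.
function halfCellToTile(hx, y, z) {
  var shift = (y + z) % 2;
  return { x: Math.floor((hx - shift) / 2), y: y, z: z };
}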

Liquify filter/iwarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic, I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors, but it's simple enough to subtract 0.5 so that you can represent negative vectors.)
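For a concrete (if unoptimised) version of that lookup, here is a sketch in plain JavaScript against canvas ImageData, with the 0.5 recentring expressed as 128 in 8-bit channels (the variable names are made up; the answer above assumes HLSL):

// input, map and output are same-sized ImageData objects; the map's R/G
// channels hold x/y offsets, with 128 meaning "no displacement".
function applyDistortion(input, map, output, scale) {
  var w = input.width, h = input.height;
  for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
      var i = (y * w + x) * 4;
      var dx = Math.round((map.data[i]     - 128) / 128 * scale);
      var dy = Math.round((map.data[i + 1] - 128) / 128 * scale);
      var sx = Math.min(w - 1, Math.max(0, x + dx)); // clamp to the image
      var sy = Math.min(h - 1, Math.max(0, y + dy));
      var j = (sy * w + sx) * 4;
      output.data[i]     = input.data[j];            // copy RGBA from the source
      output.data[i + 1] = input.data[j + 1];
      output.data[i + 2] = input.data[j + 2];
      output.data[i + 3] = input.data[j + 3];
    }
  }
}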
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously; building a proper liquify effect is quite complex and I'll leave it to someone more qualified).
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now when the user clicks on a location and moves the mouse, he's changing the grid location.
The new grid is then projected back into the 2D viewable space of the user.
Check this tutorial about a way to implement the liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming each pixel's Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
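That transform, sketched for a single pixel (a hedged reading of the tutorial's approach; cx/cy and radius are assumed parameters for the effect's centre and brush size):

// Backward mapping: where should pixel (x, y) sample from?
function sourceForPixel(x, y, cx, cy, radius) {
  var dx = x - cx, dy = y - cy;
  var r = Math.sqrt(dx * dx + dy * dy) / radius; // normalised radius
  if (r >= 1) return { x: x, y: y };             // outside the brush: unchanged
  var a = Math.atan2(dy, dx);                    // the angle stays the same
  var nr = Math.sqrt(r) * radius;                // the sqrt(r) warp
  return { x: cx + nr * Math.cos(a), y: cy + nr * Math.sin(a) };
}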
