I've written a home-brew view_port class for a 2D strategy game. The panning (with arrow keys) and zooming (with mouse wheel) work fine, but I'd like the view to also home in on wherever the cursor is placed, as in Google Maps or Supreme Commander.
I'll spare you the specifics of how the zoom is implemented and even what language I'm using: this is all irrelevant. What's important is the zoom function, which modifies the rectangle structure (x,y,w,h) that represents the view. So far the code looks like this:
void zoom(float delta, float mouse_x, float mouse_y)
{
zoom += delta;
view.w = window.w/zoom;
view.h = window.h/zoom;
// view.x = ???
// view.y = ???
}
Before somebody suggests it, the following will not work:
view.x = mouse_x - view.w/2;
view.y = mouse_y - view.h/2;
This picture illustrates why, as I attempt to zoom towards the smiley face:
As you can see, when the object underneath the mouse is placed in the centre of the screen, it stops being under the mouse, so we stop zooming towards it!
If you've got a head for maths (you'll need one), any help on this would be most appreciated!
I managed to figure out the solution, thanks to a lot of head-scratching and a lot of little pictures. I'll post the algorithm here in case anybody else needs it.
// world-space position currently under the cursor
Vect2f mouse_true(mouse_pos.x/zoom + view.x, mouse_pos.y/zoom + view.y);
// how far across the window the cursor sits (window size divided by cursor position)
Vect2f mouse_relative(window_size.x/mouse_pos.x, window_size.y/mouse_pos.y);
// place the view so that world point stays at the same fraction of the window
view.x = mouse_true.x - view.w/mouse_relative.x;
view.y = mouse_true.y - view.h/mouse_relative.y;
This ensures that objects placed under the mouse stay under the mouse. You can check out the code over on GitHub, and I also made a showcase demo for YouTube.
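For completeness, here is a minimal sketch of how that formula slots into the zoom() function from the question. It is written in TypeScript purely for illustration (the question says the language is irrelevant); the Vec2/Rect shapes, the window size and the zoomLevel variable are assumptions, and the world point is captured before the zoom level changes, which is an equivalent rearrangement of the algorithm above.

// Minimal sketch of zoom-towards-cursor (TypeScript); shapes and globals are assumptions.
interface Vec2 { x: number; y: number; }
interface Rect { x: number; y: number; w: number; h: number; }

let zoomLevel = 1;
const windowSize: Vec2 = { x: 800, y: 600 };                    // assumed window size
const view: Rect = { x: 0, y: 0, w: windowSize.x, h: windowSize.y };

function zoom(delta: number, mouseX: number, mouseY: number): void {
    // world-space point currently under the cursor (at the old zoom level)
    const worldX = mouseX / zoomLevel + view.x;
    const worldY = mouseY / zoomLevel + view.y;

    zoomLevel += delta;
    view.w = windowSize.x / zoomLevel;
    view.h = windowSize.y / zoomLevel;

    // keep that world point at the same fraction of the window,
    // so whatever is under the cursor stays under the cursor
    view.x = worldX - view.w * (mouseX / windowSize.x);
    view.y = worldY - view.h * (mouseY / windowSize.y);
}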
In my concept there is a camera and a screen.
The camera is the moving part. The screen is the scalable part.
I made an example script including a live demo.
The problem is reduced to only one dimension in order to keep it simple.
https://www.khanacademy.org/cs/cam-positioning/4772921545326592
// 'a' is the world coordinate that is currently under the mouse
var a = (mouse.x + camera.x) / zoom;
// now increase the zoom, e.g. like this:
zoom = zoom + 1;
// move the camera so the same world coordinate stays under the mouse
var newPosition = a * zoom - mouse.x;
camera.setX(newPosition);
// the screen is the scalable part, so it grows with the zoom
screen.setWidth(originalWidth * zoom);
For a 2D example you can simply add the same code for the height and y positions.
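As a rough illustration of that 2D extension, here is a small TypeScript sketch. The names (camera, zoom, originalWidth/originalHeight) mirror the 1D example above; the stub objects and the viewScreen name are assumptions added only so the snippet is self-contained.

// 2D sketch of the camera/screen idea above (TypeScript).
// camera is the moving part, viewScreen is the scalable part ("screen" in the
// example, renamed here to avoid clashing with the browser's global screen object).
let zoom = 1;
const originalWidth = 400, originalHeight = 400;
const camera = { x: 0, y: 0,
    setX(v: number) { this.x = v; },
    setY(v: number) { this.y = v; } };
const viewScreen = { width: originalWidth, height: originalHeight,
    setWidth(v: number) { this.width = v; },
    setHeight(v: number) { this.height = v; } };

function zoomAt(mouse: { x: number; y: number }): void {
    // world point currently under the cursor at the current zoom
    const ax = (mouse.x + camera.x) / zoom;
    const ay = (mouse.y + camera.y) / zoom;

    zoom = zoom + 1;                        // increase the zoom, as in the 1D example

    // reposition the camera so that world point stays under the cursor
    camera.setX(ax * zoom - mouse.x);
    camera.setY(ay * zoom - mouse.y);

    // the screen is the scalable part
    viewScreen.setWidth(originalWidth * zoom);
    viewScreen.setHeight(originalHeight * zoom);
}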
I'm using the Swiper API. When a d3 chart is embedded, the brush receives nonsensical mouse coordinates; in any case they are not relative to the container where the click occurs (which should, in fact, be at least the surrounding SVG).
I'm trying to find a solution, but I don't know how I can force d3.brushX to use mouse coordinates that are really relative.
I don't know whether this is a bug or not; it probably does not really have to do with the brush itself, but rather with how the browser passes mouse clicks down through the DIVs until the SVG is reached.
Here's the Fiddle.
(just for the annoying code rule:)
// Add brushing
var brush = d3.brushX()
The second slide contains an embedded d3 line chart example, taken from here.
The fiddle works only in Chrome 75+, not in Firefox 68+ nor in Edge 44+.
Running the chart example standalone works in all available browsers, so I'm tagging this post with Swiper and D3 in the hope of getting a hint towards a solution.
Following up on the problem, I found out that I can change the behavior in d3's point.js routine as a workaround.
If a D3 chart makes use of a brush **and** the SVG of the chart element is embedded in a surrounding DIV with an explicit width, mouse clicks are not interpreted correctly in Firefox or Edge. In Chrome it works perfectly.
I changed the code as follows so that it works in Firefox and Edge, but it loses functionality in Chrome:
// walk up the DOM from the given node until an element with the target tag name is found
function reverseTraversal(node, targetTagName) {
    var p = node;
    while (p.tagName != targetTagName) p = p.parentNode;
    return p;
}

function point(node, event) {
    var svg = node.ownerSVGElement || node;
    if (svg.createSVGPoint) {
        var point = svg.createSVGPoint();
        // find the closest surrounding DIV and add its width to the x coordinate
        var p = reverseTraversal(node, "DIV");
        var rect = p.getBoundingClientRect();
        point.x = event.clientX + rect.width, point.y = event.clientY;
        point = point.matrixTransform(node.getScreenCTM().inverse());
        return [point.x, point.y];
    }
    var rect = node.getBoundingClientRect();
    return [event.clientX - rect.left - node.clientLeft, event.clientY - rect.top - node.clientTop];
}
As you can see, I have to traverse upwards until the closest DIV is reached, get its bounding rectangle and add its width to the clientX coordinate.
Without adding that fixed width, the brush is unusable in this particular case.
To get it working in all browsers, a browser switch may be necessary; a rough sketch of such a switch follows below.
It's not a perfect solution, just a workaround for d3.brushX behavior.
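Such a switch is not part of the workaround above and is only a rough, untested sketch: it keeps d3's original computation and applies the surrounding-DIV offset only for the browsers that seemed to need it. The user-agent test and the exact condition are assumptions that would need verification.

// Sketch of a browser switch around the workaround above (assumption, untested).
function point(node, event) {
    var svg = node.ownerSVGElement || node;
    if (svg.createSVGPoint) {
        var pt = svg.createSVGPoint();
        pt.x = event.clientX, pt.y = event.clientY;
        // only Firefox and (legacy) Edge appeared to need the surrounding-DIV offset
        if (/Firefox|Edge/.test(navigator.userAgent)) {
            var rect = reverseTraversal(node, "DIV").getBoundingClientRect();
            pt.x += rect.width;
        }
        pt = pt.matrixTransform(node.getScreenCTM().inverse());
        return [pt.x, pt.y];
    }
    var rect = node.getBoundingClientRect();
    return [event.clientX - rect.left - node.clientLeft, event.clientY - rect.top - node.clientTop];
}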
I followed this blog for adding a tooltip to each point implemented using THREE.PointCloud. I used world2Screen to get the location of individual points and tried using this:
elem = document.elementFromPoint(x, y)
but I continuously get only the canvas as the output (and thus the tooltip at a fixed position) instead of the element at the clicked/hovered point.
Has anyone implemented this and knows of a workaround?
Thanks in advance
According to the blog, document.elementFromPoint() is only used to check whether or not the cursor is over the canvas. If you want to capture the point hit by the mouse click, your own raycaster can help in this case. For example:
// convert the mouse position to normalized device coordinates (NDC)
var x = ((event.clientX - viewer.canvas.offsetLeft) / viewer.canvas.width) * 2 - 1;
var y = -((event.clientY - viewer.canvas.offsetTop) / viewer.canvas.height) * 2 + 1;
// unproject into world space and cast a ray from the camera through that point
var vector = new THREE.Vector3(x, y, 0.5).unproject(this.camera);
this.raycaster.set(this.camera.position, vector.sub(this.camera.position).normalize());
var nodes = this.raycaster.intersectObject(this.pointCloud);
In the above code snippet, the event object comes from the mouse click event; see here for details: https://github.com/wallabyway/markupExt/blob/master/docs/markupExt.js#L48
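If the goal is to place the tooltip at the hit point, the intersection returned by the raycaster can then be projected back to screen space. The following is only a sketch under assumptions: the camera, canvas, tooltip div, raycaster and point cloud are taken as given from the surrounding code, and the helper worldToScreenPx is hypothetical, not part of the blog.

import * as THREE from 'three';

// provided by the surrounding viewer code (assumptions)
declare const raycaster: THREE.Raycaster;
declare const pointCloud: THREE.Object3D;
declare const camera: THREE.Camera;
declare const canvas: HTMLCanvasElement;
declare const tooltip: HTMLDivElement;

// hypothetical helper: project a world-space point to CSS pixel coordinates on the canvas
function worldToScreenPx(point: THREE.Vector3, cam: THREE.Camera, cnv: HTMLCanvasElement) {
    const ndc = point.clone().project(cam);          // normalized device coordinates
    return {
        x: (ndc.x + 1) / 2 * cnv.clientWidth,
        y: (1 - ndc.y) / 2 * cnv.clientHeight,
    };
}

// usage with the raycast from the snippet above
const hits = raycaster.intersectObject(pointCloud);
if (hits.length > 0) {
    const pos = worldToScreenPx(hits[0].point, camera, canvas);
    tooltip.style.left = pos.x + 'px';                // tooltip is an absolutely positioned div
    tooltip.style.top = pos.y + 'px';
}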
I'm developing a mobile game in Unity3D in which the player needs to move a stick that is placed just a little bit higher than the finger using transform.position, and block a ball that is moved with ForceMode2D.Impulse. The problem is that the ball goes through the stick if the stick is moved too fast. Could anyone please teach me how to code the stick movement with force (or any other way that works) so that it still moves according to the finger position on the touch screen (a.k.a. Input.mousePosition) instead of using buttons?
The code goes as follows if anyone needs the info:
Stick:
float defencePosX = Mathf.Clamp( Input.mousePosition.x / Screen.width * 5.6f - 2.8f , -2.8f, 2.8f);
float defencePosY = Mathf.Clamp( Input.mousePosition.y / Screen.height * 10 - 4f, -3.3f, -0.5f);
this.transform.position = new Vector3 (defencePosX, defencePosY, 0);
Ball:
projectileSpeed = Random.Range (maxSpeed, minSpeed);
projectileSwing = Random.Range (-0.001f, 0.001f);
rb.AddForce (new Vector2 (projectileSwing * 1000, 0), ForceMode2D.Impulse);
rb.AddForce (new Vector2 (0, projectileSpeed), ForceMode2D.Impulse);
a video of the bug:
https://youtu.be/cr2LVBlP2O0
Basically, if I don't move the stick it hits, but if I move it fast the ball goes right through. (The bouncing sound effect doesn't work if it's too fast as well.)
When working with physics objects, you'll want to move them through the Rigidbody component only. Otherwise the move is interpreted as a teleport: no physics is applied and no in-between movement is calculated.
Try using Rigidbody.MovePosition instead of transform.position.
Also, make sure the Rigidbody components on your stick AND ball both have their collision detection mode set to 'Continuous Dynamic' (or 'Continuous' on a Rigidbody2D). That's how you get small, fast-moving physics objects to hit one another in between frames.
float defencePosX = Mathf.Clamp( Input.mousePosition.x / Screen.width * 5.6f - 2.8f , -2.8f, 2.8f);
float defencePosY = Mathf.Clamp( Input.mousePosition.y / Screen.height * 10 - 4f, -3.3f, -0.5f);
rb.MovePosition(new Vector3 (defencePosX, defencePosY, 0));
I'd recommend that you reset the ball's velocity to zero before adding force to it, or that you use the collider of your blocking object as a bounce pad for the ball.
Please remember to check that your colliders are scaled correctly to match the blocker.
A video displaying your issue would be helpful to understand it better.
I have found a tutorial on parallax scrolling in SpriteKit using Objective-C, but I have been trying to port it to Swift without much success (very little, in fact).
Parallax Scrolling
Does anyone have any other tutorials or methods of doing parallax scrolling in Swift?
This is a SUPER simple way of starting a parallax background. WITH SKACTIONS! I am hoping it helps you understand the basics before moving to a harder but more effective way of coding this.
So I'll start with the code that gets a background moving, and then you can try duplicating the code for the foreground or objects you want to put in your scene.
// declare the ground texture. If you're putting this image over the top of another image, use a PNG file.
var groundImage = SKTexture(imageNamed: "background.jpg")
// make the SKActions that will move the image across the screen. This one goes from right to left.
var moveBackground = SKAction.moveByX(-groundImage.size().width, y: 0, duration: NSTimeInterval(0.01 * groundImage.size().width))
//This resets the image to begin again on the right side.
var resetBackGround = SKAction.moveByX(groundImage.size().width, y: 0, duration: 0.0)
// this makes the image move forever, running the actions in the correct sequence.
var moveBackgroundForever = SKAction.repeatActionForever(SKAction.sequence([moveBackground, resetBackGround]))
//then run a for loop to make the images line up end to end.
for var i:CGFloat = 0; i<2 + self.frame.size.width / (groundImage.size().width); ++i {
var sprite = SKSpriteNode(texture: groundImage)
sprite.position = CGPointMake(i * sprite.size.width, sprite.size.height / 2)
sprite.runAction(moveBackgroundForever)
self.addChild(sprite)
}
// once this is done, repeat for a foreground or other items, but run them at a different speed.
/* Make sure your pictures line up visually end to end. Just duplicating this code will NOT work, as you will see, but it is a starting point. Hint: if you're using items like simple obstructions, then having an action spawn a function that creates the obstruction may be a good way to go too. If there are more than two separate parallax objects, then using an array for those objects would help performance. There are many ways to handle this, so my point is simple: if you can't port it from Objective-C, then rethink it in Swift. Good luck! */
I've been doing some programming for iPhone lately and now I'm venturing into the iPad domain. The concept I want to realise relies on a navigation that is similar to Time Machine in OS X. In short, I have a number of views that can be panned and zoomed, as any normal view. However, the views are stacked upon each other using a third dimension (in this case depth). The user will then navigate to any view by, in this case, picking a letter, whereupon the app will fly through the views until it reaches the view of the selected letter.
My question is: can somebody give the complete final code for how to do this? Just kidding. :) What I need is a push in the right direction, since I'm unsure how to even start doing this, and whether it is at all possible using the frameworks available. Any tips are appreciated.
Thanks!
Core Animation—or more specifically, the UIView animation model that's built on Core Animation—is your friend. You can make a Time Machine-like interface with your views by positioning them in a vertical line within their parent view (using their center properties), having the ones farther up that line be scaled slightly smaller than the ones below (“in front of”) them (using their transform properties, with the CGAffineTransformMakeScale function), and setting their layers’ z-index (get the layer using the view’s layer property, then set its zPosition) so that the ones farther up the line appear behind the others. Here's some sample code.
// animate an array of views into a stack at an offset position (0 has the first view in the stack at the front; higher values move "into" the stack)
// took the shortcut here of not setting the views' layers' z-indices; this will work if the backmost views are added first, but otherwise you'll need to set the zPosition values before doing this
int offset = 0;
[UIView animateWithDuration:0.3 animations:^{
CGFloat maxScale = 0.8; // frontmost visible view will be at 80% scale
CGFloat minScale = 0.2; // farthest-back view will be at 20% scale
CGFloat centerX = 160; // horizontal center
CGFloat frontCenterY = 280; // vertical center of frontmost visible view
CGFloat backCenterY = 80; // vertical center of farthest-back view
for(int i = 0; i < [viewStack count]; i++)
{
float distance = (float)(i - offset) / [viewStack count];
UIView *v = [viewStack objectAtIndex:i];
v.transform = CGAffineTransformMakeScale(maxScale + (minScale - maxScale) * distance, maxScale + (minScale - maxScale) * distance);
v.alpha = (i - offset > 0) ? (1 - distance) : 0; // views that have disappeared behind the screen get no opacity; views still visible fade as their distance increases
v.center = CGPointMake(centerX, frontCenterY + (backCenterY - frontCenterY) * distance);
}
}];
And here's what it looks like, with a couple of randomly-colored views:
Do you mean something like this on the right?
If yes, it should be possible. You would have to arrange the views like in the image and animate them going forwards and backwards. As far as I know, there aren't any frameworks for this.
It's called Cover Flow and is also used in iTunes to view artwork/albums. Apple appears to have bought the technology from a third party and also to have patented it. However, if you google for iOS Cover Flow you will get plenty of hits and code to point you in the right direction.
I have not looked, but I would think it might be in the iOS library; I do not know for sure.