I have an idea for making a circle grow in size whenever an object called "food" is anywhere inside the circle's area. However, I have no idea how to implement it in my code. I have tried, but my whole approach to the idea was off.
this.touch = function() {
  if (x > this.x && y > this.y) {
    this.radius += 0.5;
  }
}
This is one of the functions in my constructor. The variables x and y reference the food's position, and the this. variables refer to the object reacting to the food.
The code snippet above does not work, simply because I am asking the object to increase in size based on a direct comparison of x and y positions, and that just doesn't work for my concept.
Can anyone give me some tips or a link to something that could help?
Thanks in advance!
I'd start by googling something like "collision detection" for a ton of results.
You can also narrow your search by adding the type of shapes you're talking about. For example, if your food is shown as a point, you might google "point circle collision detection". If your food is shown as a circle, you might google "circle circle collision detection".
If you're dealing with a point and a circle, you basically want to check the distance from the point to the center of the circle: if that distance is less than the radius of the circle, you have a collision. (For two circles, compare the distance between their centers to the sum of their radii.) The dist() function will come in handy here.
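As a minimal sketch of that point-circle check, reusing the names from the question (so x, y, this.x, this.y and this.radius are assumed to already exist), the touch function could look like this:
this.touch = function() {
  // grow only if the food point lies inside this circle
  if (dist(x, y, this.x, this.y) < this.radius) {
    this.radius += 0.5;
  }
}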
Shameless self-promotion: here is a tutorial on collision detection. It's written for Processing, but all of the concepts apply to P5.js as well.
Related
I am trying to visualize some data and I require interactivity. I represent the entities I want to visualize as balls that move like a solar system. To achieve this I used rotation and translation. However, when I use the distance function to show the name of an entity, it malfunctions and shows the name elsewhere, and the interaction has to happen somewhere else too, unlike what I have in mind. Here is a very simplified version of my code with comments.
//the angle (t) and theta factor as tt
var t = 0;
var tt = 0.01;

function setup()
{
  //creating canvas to draw on
  createCanvas(600, 600);
}

function draw()
{
  background(255);
  //translating the 0,0 point to the center of the canvas
  translate(width/2, height/2);
  //applying rotation on the matrix
  rotate(1);
  //gaining circular movement through sine and cosine oscillation
  x = sin(t) * 100;
  y = cos(t) * 50;
  //drawing the ball
  ellipse(x, y, 10, 10);
  //when the mouse is inside the ball, a text is supposed to appear with the ball that says "on it"
  if (dist(mouseX, mouseY, width/2 + x, height/2 + y) < 5)
  {
    text("on it", x, y);
  }
  //incrementing the angle
  t += tt;
}
Nothing is malfunctioning. Your problem is caused by the fact that mouseX and mouseY are always in screen space, whereas your coordinates are in model space after you do the translation and rotation.
You're going to have to project the mouse coordinates into model space. Unfortunately P5.js doesn't have the modelX() and modelY() functions that Processing has, so you're going to have to do this yourself. See George's answer to this question for an excellent guide on exactly how to do that.
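As a rough sketch of the idea (this is an assumption about your setup, not tested code: it simply undoes the translate to the canvas centre and then undoes the rotate by the same constant angle you pass to rotate()), you could convert the mouse position into model space yourself:
function mouseToModel(mx, my, angle) {
  // undo translate(width/2, height/2)
  var px = mx - width / 2;
  var py = my - height / 2;
  // undo rotate(angle) by rotating the point back by -angle
  var cosA = cos(-angle);
  var sinA = sin(-angle);
  return {
    x: px * cosA - py * sinA,
    y: px * sinA + py * cosA
  };
}
Then inside draw() you would call var m = mouseToModel(mouseX, mouseY, 1); and test dist(m.x, m.y, x, y) < 5, using the same angle you rotated by.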
The other option I can think of is to do all of your drawing to a P5.Renderer without the rotate or translate, so render space and model space will be the same. Then rotate the whole thing before you draw it. Not sure if that'll work exactly how you want it to, but it's worth investigating. More info can be found in the reference.
Currently, I'm taking each corner of my object's bounding box and converting it to Normalized Device Coordinates (NDC), keeping track of the maximum and minimum NDC values. I then calculate the middle of the NDC range, find it in the world, and have my camera look at it.
<Determine max and minimum NDCs>
// centre of the object in NDC
centerX = (maxX + minX) / 2;
centerY = (maxY + minY) / 2;
// unproject the NDC centre back into world space
point.set(centerX, centerY, 0);
projector.unprojectVector(point, camera);
// direction from the camera towards that world-space point
direction = point.sub(camera.position).normalize();
// a point at the chosen distance along that direction
point = camera.position.clone().add(direction.multiplyScalar(distance));
// aim the camera at it
camera.lookAt(point);
camera.updateMatrixWorld();
This is an approximate method, correct? I have seen it suggested in a few places. I ask because every time I center my object, the min and max NDCs should be equal when they are calculated again (before any other change is made), but they are not. I get close but not equal numbers (ignoring the negative sign), and as I step closer and closer the 'error' between the numbers grows bigger and bigger. I.e. the error for the first few centers is: 0.0022566539084770687, 0.00541687811360958, 0.011035676399427596, 0.025670088917273515, 0.06396864345885889, and so on.
Is there a step I'm missing that would cause this?
I'm using this code as part of a while loop to maximize and center the object on screen. (I'm programming it so that the user can enter a heading and an elevation, and the camera will be positioned so that it's viewing the object at that heading and elevation. After a few weeks I've determined that, for now, it's easier to do it this way.)
However, this seems to start falling apart the closer I move the camera to my object. For example, after a few iterations my max X NDC is 0.9989318709122867 and my min X NDC is -0.9552042384799428. When I look at the calculated point though, I look too far right and on my next iteration my max X NDC is 0.9420058636660581 and my min X NDC is 1.0128126740876888.
Your approach to this problem is incorrect. Rather than thinking about this in terms of screen coordinates, think about it in terms of the scene.
You need to work out how much the camera needs to move so that a ray from it hits the centre of the object. Imagine you are standing in a field and opposite you are two people, Alex and Burt; Burt is standing 2 meters to the right of Alex. You are currently looking directly at Alex but want to be looking directly at Burt without turning. If you know the distance and direction between them (2 meters, to the right), you merely need to move yourself that same distance in that same direction: 2 meters to the right.
In a mathematical context you need to do the following:
Get the centre of the object you are focusing on in 3D space, then construct a plane parallel to your camera's image plane (i.e. perpendicular to the direction the camera is facing) that passes through that point.
Next, raycast from your camera to the plane in the direction the camera is facing; the difference between the centre point of the object and the point where the ray hits the plane is the amount you need to move the camera. This should work irrespective of the direction or position of the camera and object.
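As a rough three.js sketch of those two steps (not the asker's code: camera and objectCenter, a THREE.Vector3 at the object's centre, are assumed to exist, and the API names are those of recent three.js versions):
var camDir = new THREE.Vector3();
camera.getWorldDirection(camDir);
// plane parallel to the camera's image plane, passing through the object's centre
var plane = new THREE.Plane();
plane.setFromNormalAndCoplanarPoint(camDir.clone().negate(), objectCenter);
// cast a ray from the camera along its view direction and find where it hits the plane
var ray = new THREE.Ray(camera.position.clone(), camDir);
var hit = new THREE.Vector3();
if (ray.intersectPlane(plane, hit)) {
  // the offset between the object's centre and the hit point is how far the camera must move
  var offset = objectCenter.clone().sub(hit);
  camera.position.add(offset);
}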
You are up against a chicken-and-egg problem. Every time you change the camera attributes you effectively change where your object is projected in NDC space, so even though you think you are getting close, you will never get there.
Look at the problem from a different angle. Place your camera somewhere and make it as canonical as possible (i.e. give it an aspect ratio of 1), and place your object around the camera's z-axis. Is this not possible?
Hi, I am making a car game where I draw a car-shaped rectangle as follows. xP and yP come dynamically from keyboard events in JavaScript, and so does the rotation.
ctxDrift.clearRect(0, 0, 426, 754);
ctxDrift.save();
ctxDrift.beginPath();
ctxDrift.translate(xP-car.getWidth()/2, yP-car.getHeight()/2);
ctxDrift.rotate((Math.PI / 180) * car.getRotation());
ctxDrift.translate(-xP, -yP);
ctxDrift.rect(xP-car.getWidth()/2, yP-car.getHeight()/2, car.getWidth(), car.getHeight());
ctxDrift.fillStyle = 'yellow';
ctxDrift.fill();
ctxDrift.restore();
Now there are some obstacles, also rectangles, which are not rotated. How can I check for a hit between these 2 objects? In other words, how do I check whether one rectangle's points lie inside another rectangle when one of them is rotated?
Before you even get started with collision testing:
Canvas does not track where your objects are on the canvas. You must manually keep track of the accumulated .translate() and .rotate() done by the user. You do this by capturing the transformation matrix changes for each user keyboard event. Then you accumulate the transforms into one final transformation matrix that you can use to start hit testing.
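As a rough sketch of that tracking (reusing xP, yP and the car's size and rotation from the question; this assumes the rotation is about the car's centre, which is only an approximation of the question's exact transform), you can apply the same math yourself so the corner positions are known for hit testing:
function carCorners(xP, yP, w, h, rotationDeg) {
  var a = (Math.PI / 180) * rotationDeg;
  var cosA = Math.cos(a), sinA = Math.sin(a);
  // corners relative to the car's centre
  var local = [
    { x: -w / 2, y: -h / 2 }, { x: w / 2, y: -h / 2 },
    { x: w / 2, y: h / 2 }, { x: -w / 2, y: h / 2 }
  ];
  // rotate each corner and move it to the car's position
  return local.map(function(p) {
    return { x: xP + p.x * cosA - p.y * sinA, y: yP + p.x * sinA + p.y * cosA };
  });
}
With the corner positions known in canvas coordinates, the hit tests below can be run against the (unrotated) obstacle rectangles.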
From there, the math on collision testing gets quickly complicated!
Your simplest collision test is simply to surround each rectangle with a circle and then check whether the distance between the 2 circle centerpoints is less than the sum of the 2 circle radii. The code looks like this:
function CirclesCollide(x1,y1,radius1,x2,y2,radius2){
return ( Math.sqrt( ( x2-x1 ) * ( x2-x1 ) + ( y2-y1 ) * ( y2-y1 ) ) < ( radius1 + radius2 ) );
}
If you want better collision testing and you're willing to wade through LOTS of math, here is a good source of 3 collision tests: http://www.sfml-dev.org/wiki/en/sources/simple_collision_detection
Perhaps the best solution is to use a canvas library like FabricJs which tracks where your objects are on the canvas and provides the hit-testing for you. Easy as this!
var theyAreColliding = myCar.intersectsWithObject(myObstical);
The easiest way is to rotate the rectangle bounding boxes, so they are essentially no longer rotated, before you do the collision check. Then rotate them back before the image is drawn.
Even better, have a bounding box that doesn't rotate which can be used for broad-phase testing (a quick and cheap check to see if you need to then do a narrow-phase check).
This is known as an axis-aligned bounding box, or AABB for short. This greatly simplifies your collision detection code.
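As a minimal sketch of that broad-phase test (the {x, y, w, h} rectangle shape here is illustrative, not from the question's code):
function aabbOverlap(a, b) {
  return a.x < b.x + b.w &&
         a.x + a.w > b.x &&
         a.y < b.y + b.h &&
         a.y + a.h > b.y;
}
Only when this cheap check passes do you need to run the more expensive rotated-rectangle (narrow-phase) test.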
update: Found this link that might be useful.
This is what I was looking for with this query:
http://www.rgraph.net/blog/2012/october/new-html5-canvas-features.html
Canvas now has an addHitRegion() function, with which we can track this easily.
A newer one, and the best:
http://www.playmycode.com/blog/2011/08/javascript-per-pixel-html5-canvas-image-collision-detection/
I have finally added my own logic, which is here:
http://jslogic.blogspot.in/2013/02/javascript-bound-rectangle-area-while.html
I have a situation that I'm not really sure how to handle. I have an OpenGL object of about 20k vertices, and I need to offer the user the possibility to select any one of these vertices (with the smallest margin of error possible). Here is what I want to do in order to achieve this:
Next to the 3D canvas of the object, I also offer the user 3 'slices' made by the planes x=0, y=0 and z=0. For the simplest example, a sphere, these would be 3 circles, each corresponding to 'cutting' out one of the dimensions. Now let's take the z=0 slice for the purpose of the example. When the user clicks on a point, say (x_circle, y_circle), I would like to get the actual point in the 3D representation where they clicked. The z would be 0 of course, but I can't figure out a way to get the x and y. I can easily translate that (x_circle, y_circle) -> (x_screen, y_screen), which would have the same result as a click on the canvas at those coordinates, but I need to find a way to translate that into the (x, y, 0) coordinate in the 3D view.
The same thing would need to be done for x=0 and y=0, but I think if I can understand and implement a way for z=0, I can apply more or less the same solution with an added rotation. If anyone can help with any examples/code or even the math behind this, it would help a lot, because at the moment I'm not really sure how to proceed.
When the user clicks, you can render the vertices using GL.POINTS (with a certain point size, if you like) to an off-screen buffer, using a shader that renders each vertex's index into RGBA. Then you read back the pixel under the mouse position and see which index it is.
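As a minimal sketch of the index packing/unpacking on the JavaScript side (the names here are illustrative, and the shader that writes the packed colour per point is assumed to exist):
// pack a vertex index into 4 normalized colour channels for the shader
function indexToRGBA(i) {
  return [
    (i & 0xff) / 255,
    ((i >> 8) & 0xff) / 255,
    ((i >> 16) & 0xff) / 255,
    ((i >> 24) & 0xff) / 255
  ];
}
// recover the index from the 4 bytes returned by gl.readPixels
function rgbaToIndex(pixel) {
  return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16) | (pixel[3] << 24);
}
// after rendering the points to the off-screen buffer:
// var pixel = new Uint8Array(4);
// gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
// var clickedVertex = rgbaToIndex(pixel);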
I'm currently drawing a 3D solar system and I'm trying to draw the paths of the orbits of the planets. The calculated data is correct in 3D space, but when I move towards Pluto, the orbit line shakes all over the place until the camera has come to a complete stop. I don't think this is unique to this particular planet, but given the distance the camera has to travel I think it's more visible at this range.
I suspect it's something to do with the frustum, but I've been plugging values into each of the components and I can't seem to find a solution. To see anything I'm having to use very small numbers (of E-5 magnitude) for the planet and nearby orbit points, but then up to E+2 magnitude for the farther regions (maybe I need to draw it twice with different frustums?).
Any help greatly appreciated...
Thanks all for answering, but my solution was to draw the orbit with the same matrices that were used to draw the planet, since the planet wasn't bouncing around. So the solution really was just to structure the code better, sorry.