I'm trying to make a Space Invaders-style game using images stored in a 2D array. I'm trying to create the hit detection needed for when the defender's bullet hits one of the images in the 2D array. The image does not have a single colour across its front, so colour detection can't be used. My idea was to just check the x and y coordinates of the bullet against the array using a nested loop.
boolean isHit() {
  for (int i = 0; i < 2; i++) {
    for (int j = 0; j < 4; j++) {
      if (invArray[j][i].x == x && invArray[j][i].y == y) {
        return true;
      }
    }
  }
  return false;
}
George's comment is exactly correct.
I'll just add that you should get into the habit of breaking your problem down into smaller steps and taking on those steps one at a time.
For example, I would start by creating a separate example sketch that just shows a rectangle. Now make it so that rectangle changes color whenever the mouse is inside it. Get that working perfectly before moving on. Then make it so that instead of the mouse position, it's a bouncing circle: change the color of the rectangle whenever the circle is inside the rectangle.
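Here's an untested sketch of just that first step, so you can see how small it is (the rectangle's position and size are arbitrary values, not anything from your game):

float rectX = 100, rectY = 80, rectW = 120, rectH = 60;

void setup() {
  size(320, 240);
}

void draw() {
  background(220);
  // point-in-rectangle test against the mouse position
  if (mouseX > rectX && mouseX < rectX + rectW &&
      mouseY > rectY && mouseY < rectY + rectH) {
    fill(255, 0, 0);   // mouse is inside: red
  } else {
    fill(0, 0, 255);   // mouse is outside: blue
  }
  rect(rectX, rectY, rectW, rectH);
}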
When you have that working perfectly, then move on to a 2D array of rectangles. Get that working perfectly, then it should be pretty easy to switch to images instead of rectangles.
Work in small steps, and work on them in isolation without worrying about the rest of your project. Then if you get stuck, you can post a MCVE of just that specific step, and we can go from there. Good luck.
I'm using Cannon.js with Three.js.
I've created a scene which consists of 1 heightfield and 5 balls. I want the balls to roll around the heightfield, using the cannon.js physics.
On mouse move, I rotate the heightfield along the y-axis to make the spheres roll back and forth.
I have an update loop which copies each sphere's position and quaternion from cannon.js and applies them to the corresponding three.js sphere.
The heightfield body is also updated at the same time to match the three.js visual floor. Both of these run in a for loop inside requestAnimationFrame.
updateMeshPositions() {
  for (var i = 0; i !== this.meshes.length; i++) {
    this.meshes[i].position.copy(this.bodies[i].position);
    this.meshes[i].quaternion.copy(this.bodies[i].quaternion);
    this.hfBody.position.copy(this.mesh.position);
    this.hfBody.quaternion.copy(this.mesh.quaternion);
  }
}
However, the problem is that when the 'floor' is rotating back and forth, the spheres are getting stuck and sometimes even falling through the floor. Here is an example on codepen - https://codepen.io/danlong/pen/qJwMBo
Move the mouse up and down on the screen to see this in action.
Is there a better or different way I should be rotating the 'floor' whilst keeping the spheres moving?
Directly (i.e. "instantly") setting position/rotation is likely to break collision handling in any physics engine, including cannon.js. Effectively you are teleporting things through space, causing objects to get stuck in or pass through each other.
What you should do is:
1. Set the velocity (both .velocity and .angularVelocity) of, or apply forces to, the Cannon.js bodies.
2. Copy the transform of those bodies to your visual meshes (notice this is exactly the reverse of what you are currently doing in your code).
Determining the right amount of force to apply to get the desired visual behavior is usually the tricky part.
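Here's a rough, untested sketch of that order of operations, reusing the names from your question (this.world, this.bodies, this.meshes); the tilt value and the push strength are made-up tuning parameters you would have to adjust by hand:

updatePhysics(delta, tilt) {
  // 1. Drive the simulation through cannon.js: nudge the bodies by
  //    changing their velocity (or angularVelocity, or applied forces)
  //    instead of writing positions or quaternions directly.
  for (let i = 0; i < this.bodies.length; i++) {
    this.bodies[i].velocity.z += 5 * tilt * delta;
  }

  // Let the physics engine integrate and resolve collisions.
  this.world.step(1 / 60, delta);

  // 2. Copy the simulated transforms onto the visual meshes
  //    (the opposite direction of the original updateMeshPositions()).
  for (let i = 0; i < this.meshes.length; i++) {
    this.meshes[i].position.copy(this.bodies[i].position);
    this.meshes[i].quaternion.copy(this.bodies[i].quaternion);
  }
}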
I drew multiple white pixels on a black canvas to get a night sky. I gave the stars random positions, and now I want all the pixels to move down in order to imitate the movement of the earth.
I tried translate() but that doesn't seem to work with pixels.
Is there a way to move all the pixels in the canvas down?
arrayCopy(pixels, 0, pixels, width, (height - 1) * width);
That should solve the problem you have. For more help with arrayCopy(), look here: https://processing.org/reference/arrayCopy_.html
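In context, that call would sit inside draw() between loadPixels() and updatePixels(). An untested sketch (the star count is arbitrary):

void setup() {
  size(400, 400);
  background(0);
  // scatter some white "stars" once
  stroke(255);
  for (int i = 0; i < 200; i++) {
    point(random(width), random(height));
  }
}

void draw() {
  loadPixels();
  // shift every row of pixels down by one
  arrayCopy(pixels, 0, pixels, width, (height - 1) * width);
  // clear the now-duplicated top row so the stars don't smear
  for (int x = 0; x < width; x++) {
    pixels[x] = color(0);
  }
  updatePixels();
}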
Basically, the process of creating an animation is this:
Store your state in variables, or in a buffer.
Use those variables to draw your scene every frame.
Change those variables over time to change your scene.
One approach is to draw your stars to a buffer. The createGraphics() function is your friend. Then draw that buffer to the screen using the image() function. Then move the y position of the buffer down by some amount each frame.
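An untested sketch of that buffer approach (the star count and scroll speed are arbitrary, and it draws the buffer twice so the stars wrap around instead of scrolling off):

PGraphics stars;
float offsetY = 0;

void setup() {
  size(400, 400);
  stars = createGraphics(width, height);
  stars.beginDraw();
  stars.background(0);
  stars.stroke(255);
  for (int i = 0; i < 200; i++) {
    stars.point(random(width), random(height));
  }
  stars.endDraw();
}

void draw() {
  background(0);
  offsetY = (offsetY + 0.5) % height;
  // draw the buffer twice so the stars wrap around seamlessly
  image(stars, 0, offsetY);
  image(stars, 0, offsetY - height);
}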
Another approach is to store your star positions in a set of variables, such as an ArrayList of PVector instances. Draw those positions to the screen, and move each one down a bit each frame.
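An untested sketch of that second approach (again, the numbers are arbitrary):

ArrayList<PVector> stars = new ArrayList<PVector>();

void setup() {
  size(400, 400);
  for (int i = 0; i < 200; i++) {
    stars.add(new PVector(random(width), random(height)));
  }
}

void draw() {
  background(0);
  stroke(255);
  for (PVector star : stars) {
    point(star.x, star.y);
    star.y += 0.5;        // move each star down a bit
    if (star.y > height) {
      star.y = 0;         // wrap back to the top
    }
  }
}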
The translate() function should work fine for points, and it's just another approach to the steps I outlined above. As is Tobias's answer. There are a bunch of different ways to do this. If you're still having trouble, please post a MCVE in a new question post. Good luck.
Shameless self-promotion: I wrote a tutorial on creating animations in Processing available here.
I am trying to visualize some data, and I require interactivity. I represent the entities that I want to visualize as balls that move like a solar system. In order to achieve this I used rotation and translation. However, when I use the distance function to show the name of an entity, it malfunctions and shows the name elsewhere, and the interaction has to happen somewhere other than what I have in mind. Here is a very simplified version of my code with comments.
//the angle (t) and theta factor as tt
var t = 0;
var tt = 0.01;

function setup() {
  //creating the canvas to draw on
  createCanvas(600, 600);
}

function draw() {
  background(255);
  //translating the (0,0) point to the center of the canvas
  translate(width / 2, height / 2);
  //applying rotation to the matrix
  rotate(1);
  //gaining circular movement through sine and cosine oscillation
  x = sin(t) * 100;
  y = cos(t) * 50;
  //drawing the ball
  ellipse(x, y, 10, 10);
  //when the mouse is inside the ball, a text that says "on it" is supposed to appear on the ball
  if (dist(mouseX, mouseY, width / 2 + x, height / 2 + y) < 5) {
    text("on it", x, y);
  }
  //incrementing the angle
  t += tt;
}
Nothing is malfunctioning. Your problem is caused by the fact that mouseX and mouseY are always in screen space, whereas your coordinates are in model space after you apply the translation and rotation.
You're going to have to project the mouse coordinates into model space. Unfortunately P5.js doesn't have the modelX() and modelY() functions that Processing has, so you're going to have to do this yourself. See George's answer to this question for an excellent guide on exactly how to do that.
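For this specific sketch, where the only transforms are translate(width/2, height/2) and rotate(1), you can undo them by hand. An untested sketch (mouseInModelSpace() is just a helper name I made up, and the angle is the same constant passed to rotate() in your code):

function mouseInModelSpace() {
  const angle = 1;                 // same constant passed to rotate()
  const dx = mouseX - width / 2;   // undo the translate
  const dy = mouseY - height / 2;
  // undo the rotation by rotating back through -angle
  const mx = dx * Math.cos(angle) + dy * Math.sin(angle);
  const my = -dx * Math.sin(angle) + dy * Math.cos(angle);
  return { x: mx, y: my };
}

// Inside draw(), after computing x and y for the ball:
// const m = mouseInModelSpace();
// if (dist(m.x, m.y, x, y) < 5) {
//   text("on it", x, y);
// }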
The other option I can think of is to do all of your drawing to a P5.Renderer without the rotate or translate, so render space and model space will be the same. Then rotate the whole thing before you draw it. Not sure if that'll work exactly how you want it to, but it's worth investigating. More info can be found in the reference.
I just started learning OpenGL and cocos2d and I need an advice.
I'm writing a game in which the player is allowed to touch and move rectangles on the screen in a top-down view. Every time a rectangle is touched, it moves up (towards the screen) in the z direction and is scaled a bit to look like it's closer than the rest. It drops back down to z = 0 after the touch ends.
I'd like the raised rectangles to cast a shadow under them, but I can't get it to work. What approach would you recommend for the best result?
Here's what I have so far.
During setup I turn on the depth buffer and then:
1. all the textures are generated with CCRenderTexture
2. the generated textures are used as an atlas to create CCSpriteBatchNode
3. when a rectangle (tile) is touched:
static const float _raisedScale = 1.2;
static const float _raisedVertexZ = 30;
...
-(void)makeRaised
{
    _state = TileStateRaised;
    self.scale = _raisedScale;
    self.vertexZ = _raisedVertexZ;
    _glowOverlay.vertexZ = _raisedVertexZ;
    _glowOverlay.opacity = 255;
}
The glow overlay is used to "light up" the rectangle.
After that I animate it using -(void)update:(ccTime)delta
Is there a way to make OpenGL cast the shadow for me using cocos2d? For example using shaders or OpenGL shadowing? Or do I have to use a texture overlay to simulate the shadow?
What do you recommend? How would you do it?
Sorry for the newbie question, but it's all really new to me and I really need your help.
EDIT 6th of March
I managed to get sprites with a shadow overlay to show under the tiles, and it looks OK until one tile has to drop a shadow on another tile that has a non-zero vertexZ value. I tried to create additional shadow sprites that would be scaled and shown on top of the other tiles (usually while rising or falling), but I have problems with the animation (tile up, tile down).
Why complicate the problem?
Simply create a projection of how the shadow would look using your favourite graphics editing program and save it as a PNG. When the object is lifted, insert your shadowSprite behind the lifted object (you can shift it left or right depending on where you think your light source is).
When the user drops the object down, the shadow can remain under the object and move with it, making itself visible again when the item is lifted.
I have a general question (I know I should present specific code with a problem, but in my case the problem is of a more general nature).
In Processing, let's say I make an ellipse:
ellipse(30, 30, 10, 10);
Now, is there a way to get the pixels where this ellipse is on the canvas? The reason would be to have a way of creating user interaction with the mouse (for instance). So when someone clicks the mouse over the ellipse, something happens.
I thought of turning everything into objects and use a constructor to somehow store the position of the shape, but this is easier said than done, particularly for more complex shapes. And that is what I am interested in. It's one thing to calculate the position of an ellipse, but what about more complex shapes? Are there any libraries?
Check out the geomerative library. It has a way to check whether the mouse is inside any SVG shape. I can't remember the exact syntax off the top of my head, but it works something like this: you load a shape:
myShape = RG.loadShape("shape.svg");
and a point:
RPoint p = new RPoint(mouseX, mouseY);
and the boolean function contains() will tell you if the point is inside the shape:
myShape.contains(p);
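Pulling those pieces together, an untested sketch might look like this (the shape.svg filename is from the snippet above; RG.init() and the RG.shape() drawing call are the usual geomerative boilerplate, written from memory):

import geomerative.*;

RShape myShape;

void setup() {
  size(400, 400);
  RG.init(this);                        // geomerative must be initialised first
  myShape = RG.loadShape("shape.svg");  // same file name as above
}

void draw() {
  background(255);
  RG.shape(myShape);                    // draw the loaded shape
  RPoint p = new RPoint(mouseX, mouseY);
  fill(0);
  if (myShape.contains(p)) {
    text("mouse is inside the shape", 10, 20);
  }
}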
It's better to use a mathematical formula than pixel-by-pixel checking of the mouse position (it's much faster, and involves less code).
For a perfect circle, you can calculate the Euclidean distance using Pythagoras' theorem. Assume your circle is centred at position (circleX,circleY), and has a radius (not diameter) of circleR. You can check if the mouse is over the circle like this:
if (sq(mouseX - circleX) + sq(mouseY - circleY) <= sq(circleR)) {
  // mouse is over circle
} else {
  // mouse is not over circle
}
This approach basically imagines a right-angled triangle, where the hypotenuse (the longest side) runs from the centre of the circle to the mouse position. It uses Pythagoras' theorem to calculate the length of that hypotenuse, and if it's less than the circle's radius then the mouse is inside the circle. (It includes a slight optimisation though -- it's comparing squares to avoid doing a square root, as that can be comparatively slow.)
An alternative to my original mathematical answer also occurred to me. If you can afford the memory and processing power of drawing all your UI elements twice then you can get good results by using a secondary buffer.
The principle involves having an off-screen graphics buffer (e.g. using PGraphics). It must be exactly the same size as the main display, and have anti-aliasing disabled. Draw all your interactive UI elements (buttons etc.) to this buffer. However, instead of drawing them the normal way, give each one a unique colour which it uses for fill and stroke (don't add any text or images... just solid colours). For example, one button might be entirely red, and another entirely green. Any other RGB value works, as long as each item has a unique colour. Make sure the background has a unique colour too.
The user never sees that buffer, so don't draw it to the screen (unless you're debugging or something). When you want to detect what item the mouse is over, just lookup the mouse position on that off-screen buffer. Get the pixel colour at that location, and match it to the UI element.
After you've done all that, just go ahead and draw everything to the main display as normal.
It's worth noting that you can cut down the processing time of this approach a lot if your UI elements never (or rarely) move. You only need to redraw the secondary buffer when something appears/disappears, animates, or changes size/position.
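An untested sketch of that idea; the element positions and the picking colours are all placeholders, not anything from the question:

PGraphics picker;   // hidden buffer: one flat, unique colour per UI element

void setup() {
  size(400, 400);
  picker = createGraphics(width, height);
  picker.noSmooth();  // no anti-aliasing, so every pixel keeps its exact colour
}

void draw() {
  // Redraw the hidden buffer. (If the UI never moves, you could do this
  // only when something changes instead of every frame.)
  picker.beginDraw();
  picker.background(0);          // unique colour for "nothing"
  picker.noStroke();
  picker.fill(255, 0, 0);        // unique colour for the rectangle
  picker.rect(50, 50, 120, 40);
  picker.fill(0, 255, 0);        // unique colour for the circle
  picker.ellipse(260, 220, 120, 120);
  picker.endDraw();

  // Draw the visible UI as normal.
  background(220);
  fill(40);
  rect(50, 50, 120, 40);
  fill(90);
  ellipse(260, 220, 120, 120);

  // Look up which element the mouse is over by reading the hidden buffer.
  color c = picker.get(mouseX, mouseY);
  fill(0);
  if (c == color(255, 0, 0)) {
    text("over the rectangle", 10, 20);
  } else if (c == color(0, 255, 0)) {
    text("over the circle", 10, 20);
  }
}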