I am trying to make a pool game in C with GTK+ 3.0.
The game is top-view, so the pool balls are drawn as circles.
I've assigned an image to each circle with GTK.
Now I somehow want to show sphere rotation in my 2D game. I guess there is some trick involving changing the image while the ball moves, but I don't know exactly what to do!
I just want my circles to look like 3D spheres when they move. Any ideas?
Please try the game development Stack Exchange site: https://gamedev.stackexchange.com/
If the balls have only one color, then what can convey the notion of motion is either the changing of the lighting (better done in 3D) or the rotation of the number on the ball.
Take a look here to see what was done a few years back on the Amiga computer:
http://www.youtube.com/watch?v=zTQIPFBUFIg
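The cheap 2D version of that trick is to pre-render a handful of frames of the ball at successive rotation phases and pick the frame from the distance the ball has rolled. A rough sketch of the bookkeeping, in TypeScript for brevity since the arithmetic is identical in C; the frame count and radius are made-up values:

```typescript
// Pick a pre-rendered sprite frame from the distance the ball has rolled.
// Assumes FRAMES_PER_REV images covering one full revolution of the markings.
const FRAMES_PER_REV = 16;  // hypothetical number of pre-rendered frames
const BALL_RADIUS = 12;     // ball radius in pixels (made up)
const CIRCUMFERENCE = 2 * Math.PI * BALL_RADIUS;

function frameForDistance(distanceRolled: number): number {
  // Fraction of a full revolution, wrapped into [0, 1) even for negatives.
  const revs = ((distanceRolled / CIRCUMFERENCE) % 1 + 1) % 1;
  return Math.floor(revs * FRAMES_PER_REV);
}

// In the game loop: accumulate the distance moved each tick and swap the
// ball's image to frames[frameForDistance(totalDistance)].
```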
We're trying to get a realistic effect of a plane being rolled up into a coil in an animation, like a carpet rolling up or toilet paper being rolled onto a cardboard tube.
The two ways that are usually suggested are:
Use a spiral and add a curve modifier to the plane - but this is not an accurate representation, because the first roll is the widest diameter and then the coil 'tightens'. That is not how paper really winds onto a cardboard tube ...
The cylinder/plane trick - move a cylinder while expanding the plane (so one edge is always under the cylinder) and increase/decrease the size of the cylinder. This is a clever way to mimic a ribbon being wound/unwound, but our plane is actually a complex model, so we wouldn't be able to get away with it.
The current animation we are working on is all in Blender Render, but if Blender Cycles were the only way to crack this I would go there! ;)
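For what it's worth, if our reasoning is right, the curve we actually need is an Archimedean spiral rather than a fixed spiral: with tube radius r0 and paper thickness t, the radius after winding angle θ should be r(θ) = r0 + t·θ/(2π), so each full turn moves the paper outward by exactly one thickness.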
I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things get rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of two ways:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
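In three.js that layered rendering can be done by turning off auto-clear and clearing only the depth buffer between the two passes. A minimal sketch, assuming a recent three.js version; the scene names and star counts are mine, not from the question:

```typescript
import * as THREE from 'three';

// Two scenes: one holding only the stardome, one holding everything else.
const starScene = new THREE.Scene();
const mainScene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 1000);

// A simple stardome: random points on a sphere, comfortably inside the far plane.
const COUNT = 2000, RADIUS = 500;
const positions = new Float32Array(3 * COUNT);
for (let i = 0; i < COUNT; i++) {
  const u = 2 * Math.random() - 1;          // uniform cos(theta) over the sphere
  const phi = 2 * Math.PI * Math.random();
  const s = Math.sqrt(1 - u * u);
  positions[3 * i]     = RADIUS * s * Math.cos(phi);
  positions[3 * i + 1] = RADIUS * s * Math.sin(phi);
  positions[3 * i + 2] = RADIUS * u;
}
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const stars = new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ size: 2, sizeAttenuation: false })
);
starScene.add(stars);

const renderer = new THREE.WebGLRenderer();
renderer.autoClear = false;                 // we manage clearing ourselves

function render() {
  stars.position.copy(camera.position);     // dome follows the camera (position only)
  renderer.clear();                         // clear color + depth
  renderer.render(starScene, camera);       // stars first...
  renderer.clearDepth();                    // ...then throw away their depths
  renderer.render(mainScene, camera);       // foreground always draws on top
}
```

Because the depth buffer is cleared between the passes, the stars can never occlude foreground objects, no matter how the distances compare.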
Even if you used option 1, you would still need to synchronize the positions of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not rotation, to the camera.
Also disable lighting on the skybox. Instead, bake an ambience directly into its texture.
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling function so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh, which means you'll end up drawing things that you really do want to cull.
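In three.js specifically, you can approximate that "always further away" rule without a custom depth function by using material flags; this is my substitution, not necessarily what was meant above. Reusing the stars object from the earlier sketch:

```typescript
// Make the stardome ignore the depth buffer entirely: drawn first
// (negative renderOrder) and neither writing nor testing depth, it can
// never occlude foreground objects.
const starMaterial = stars.material as THREE.PointsMaterial;
starMaterial.depthWrite = false;  // stars leave no footprint in the depth buffer
starMaterial.depthTest = false;   // and never lose a depth comparison themselves
stars.renderOrder = -1;           // render before everything else in the scene
```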
A quick introduction:
We're developing a positioning system that works the following way. Our camera is situated on a robot and is pointed upwards (looking at the ceiling). On the ceiling we have something like landmarks, thanks to which we can compute the position of the robot.
Our problem:
The camera is tilted a bit (0-4 degrees, I think), because the surface of the robot is not perfectly even. That means that when the robot turns around but stays at the same coordinates, the camera looks at a different position on the ceiling, and therefore our positioning program yields a different position for the robot, even though it only turned around and wasn't moved at all.
Our current (hardcoded) solution:
We've taken some test photos with the camera, turning it around the lens axis. From the pictures we've deduced that it's tilted ca. 4 degrees in the "up" direction of the picture. Using some simple geometric transformations we've managed to reduce the tilt effect and find the real camera position. In the pictures, the grey dot marks the center of the picture, and the black dot is the real place on the ceiling under which the camera is situated; the black dot's position was computed by correcting the grey dot's position for the tilt. As you can easily notice, the grey dots form a circle on the ceiling, and the black dot is the center of this circle.
The problem with our solution:
Our approach is completely unportable. If we moved the camera to a new robot, the angle and direction of the tilt would have to be recalibrated from scratch. Therefore we wanted to leave the calibration phase to the user, which would require taking some pictures, estimating the tilt parameters from them, and then setting them in the program. My question to you: can you think of any better (more automatic) way of computing the tilt parameters or correcting the tilt in the pictures?
Nice work. Automatic calibration is a nice challenge.
An idea would be to use the parallel lines from the roof tiles:
If the camera is perfectly level, then all lines will be parallel in the picture too.
If the camera is tilted, then the lines will converge (they intersect at the vanishing point).
Now, this is probably very hard to implement. With the camera you're using, lens distortion needs to be corrected first so that the lines are actually straight.
Your practical approach is probably simpler and more robust. As you describe it, it seems it can be automated to become user-friendly: make the robot turn in place and identify programmatically which point remains at the same place in the picture.
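A sketch of how that automation could look, assuming you can track one ceiling feature across a full in-place rotation: its pixel positions trace a circle, and a least-squares (Kåsa) circle fit recovers the center, which is the true "straight down" pixel to use in place of the image center. The code below is illustrative, not from the original setup:

```typescript
// Kåsa least-squares circle fit: find center (a, b) and radius r minimizing
// the sum over points of ((x - a)^2 + (y - b)^2 - r^2)^2.
// Linearization: x^2 + y^2 = 2a*x + 2b*y + c, with c = r^2 - a^2 - b^2.
type Point = { x: number; y: number };

function fitCircle(points: Point[]): { cx: number; cy: number; r: number } {
  let sxx = 0, sxy = 0, syy = 0, sx = 0, sy = 0;
  let sxz = 0, syz = 0, sz = 0;
  const n = points.length;
  for (const { x, y } of points) {
    const z = x * x + y * y;
    sxx += x * x; sxy += x * y; syy += y * y;
    sx += x; sy += y;
    sxz += x * z; syz += y * z; sz += z;
  }
  // Normal equations A * [2a, 2b, c]^T = rhs, solved with Cramer's rule.
  const A = [
    [sxx, sxy, sx],
    [sxy, syy, sy],
    [sx,  sy,  n ],
  ];
  const rhs = [sxz, syz, sz];
  const det = (m: number[][]) =>
    m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
    m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
    m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
  const replaceCol = (m: number[][], i: number, v: number[]) =>
    m.map((row, r) => row.map((val, c) => (c === i ? v[r] : val)));
  const d = det(A);
  const p = det(replaceCol(A, 0, rhs)) / d;  // p = 2a
  const q = det(replaceCol(A, 1, rhs)) / d;  // q = 2b
  const c = det(replaceCol(A, 2, rhs)) / d;  // c = r^2 - a^2 - b^2
  const cx = p / 2, cy = q / 2;
  return { cx, cy, r: Math.sqrt(c + cx * cx + cy * cy) };
}
```

Feed it the tracked pixel positions from the rotation: (cx, cy) is the point that stays put as the robot spins, and its offset from the image center gives the tilt correction.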
I have an airplane. I use a rectangle to bound this airplane for collision detection, and it works great. When the airplane begins falling down, I rotate the airplane's texture, but the rectangle remains unchanged. I don't know how to rotate it. I need to rotate it with the airplane's texture, because my shell doesn't collide with the airplane's tail and cabin.
How can I rotate the rectangle, or perhaps create a polygon shape that wraps the whole airplane? Any help will be appreciated!
#jellyfication's answer points to raycasting, but a different and also simple approach you could implement is the Separating Axis Theorem. The links below will show you in detail what the algorithm is about and how to implement it. They also have some interactive demos so you get the 'feel' for what the algorithm is doing.
http://www.metanetsoftware.com/technique/tutorialA.html
http://www.sevenson.com.au/actionscript/sat/
http://www.codezealot.org/archives/55 (this one has a lot of code)
http://gamedev.tutsplus.com/tutorials/implementation/collision-detection-with-the-separating-axis-theorem/
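For a quick taste before diving into those, here is a compressed sketch of the core test those tutorials implement, for two convex polygons (TypeScript; the types and names are mine):

```typescript
// Separating Axis Theorem for convex polygons (vertices in winding order):
// if the projections onto every edge normal overlap, the polygons collide;
// a single axis with a gap proves they don't.
type Vec = { x: number; y: number };

function project(poly: Vec[], axis: Vec): [number, number] {
  let min = Infinity, max = -Infinity;
  for (const v of poly) {
    const d = v.x * axis.x + v.y * axis.y;  // dot product
    if (d < min) min = d;
    if (d > max) max = d;
  }
  return [min, max];
}

function polygonsCollide(a: Vec[], b: Vec[]): boolean {
  for (const poly of [a, b]) {
    for (let i = 0; i < poly.length; i++) {
      const p1 = poly[i], p2 = poly[(i + 1) % poly.length];
      // Perpendicular of the edge is a candidate separating axis.
      const axis = { x: -(p2.y - p1.y), y: p2.x - p1.x };
      const [minA, maxA] = project(a, axis);
      const [minB, maxB] = project(b, axis);
      if (maxA < minB || maxB < minA) return false;  // gap found: no collision
    }
  }
  return true;  // no separating axis exists: the polygons overlap
}
```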
Good luck!
Use the Polygon class to define and draw your bounding box.
The Polygon class has a method to rotate it.
Rotate and move the polygon together with the plane.
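If your framework doesn't offer a Polygon class like that, the rotation itself is just turning each corner around a pivot; a minimal sketch (the Vec type matches the SAT snippet above, and the pivot would typically be the sprite's center):

```typescript
// Rotate a polygon's vertices by `angle` radians around `pivot`, so the
// bounding shape tracks the rotated airplane texture.
function rotatePolygon(verts: Vec[], pivot: Vec, angle: number): Vec[] {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  return verts.map(({ x, y }) => {
    const dx = x - pivot.x, dy = y - pivot.y;
    return {
      x: pivot.x + dx * cos - dy * sin,  // standard 2D rotation
      y: pivot.y + dx * sin + dy * cos,
    };
  });
}
```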
I'm currently drawing a 3D solar system and I'm trying to draw the path of the orbits of the planets. The calculated data is correct in 3D space, but when I go towards Pluto, the orbit line shakes all over the place until the camera has come to a complete stop. I don't think this is unique to this particular planet, but given the distance the camera has to travel, I think it's more visible at this range.
I suspect it's something to do with the frustum, but I've been plugging values into each of the components and I can't seem to find a solution. To see anything I'm having to use very small numbers (E-5 magnitude) for the planet and nearby orbit points, but then up to E+2 magnitude for the further regions (maybe I need to draw it twice with different frustums?)
Any help greatly appreciated...
Thanks all for answering, but my solution to this was to draw the orbit with the same matrices that were drawing the planet, since the planet wasn't bouncing around. So the solution really was just to structure the drawing code better, sorry.
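If anyone hits the same thing: the underlying issue is 32-bit float precision, and the fix generalizes to keeping vertex coordinates small and letting one shared transform carry the large offset. A sketch of the idea, with hypothetical names, not the original code:

```typescript
type Vec3 = { x: number; y: number; z: number };

// Hypothetical inputs: world-space orbit samples and the planet's center.
declare const orbitPoints: Vec3[];
declare const planetCenter: Vec3;

// Re-express the orbit relative to the planet so the vertex values stay small...
const localPoints = orbitPoints.map(p => ({
  x: p.x - planetCenter.x,
  y: p.y - planetCenter.y,
  z: p.z - planetCenter.z,
}));
// ...then draw localPoints with the same model matrix used for the planet
// (whose translation re-adds planetCenter). The large magnitudes live in
// one matrix instead of in every 32-bit vertex, which removes the jitter.
```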