How can I implement slow, smooth background scrolling in SDL?

I am trying to implement background scrolling using SDL 2.
As far as I understand, one can only move the source rectangle by an integer value.
My scrolling works fine when I move it by one every iteration of the game loop, but I want to move it more slowly. I tried to move it using this code:
moved += speed;
if (moved >= 1.0) {
    ++src_rect.x;
    moved -= 1.0;
}
Here moved and speed are doubles. I want my background to move about ten times slower, so I set speed to 0.1. It does move ten times slower, but the animation is no longer smooth: it jumps from one pixel to the next, which looks and feels ugly when the speed is low.
I am thinking of making my background larger and scrolling it using an integer; maybe when the background is large enough, a speed of 1 will seem slower.
Is there a way to scroll a background that is not very large both slowly and smoothly?
Thanks.

What I would do is keep a set of floats that track the virtual screen position, then cast the floats to integers when you actually render. That way you never lose the precision of the floats.
To give you an example: I have an SDL_Rect that I want to move every frame. I keep two floating-point variables that track the x and y position of the rect; every frame I update those positions, cast them to integers, and then render the rect. For example:
// Rect position
float XPos = 0.0f;
float YPos = 0.0f;
SDL_Rect rect = {0, 0, 64, 64};

// Update virtual positions
XPos += 20.0f * DeltaTime;
YPos += 20.0f * DeltaTime;

// Move rect down and to the right
rect.x = (int)XPos;
rect.y = (int)YPos;
While this doesn't give you the exact result you are after, it is the only way I know of to do this. It lets you delay your movement more precisely without that ugly chunkiness, and it also lets you add things like more precise acceleration. Hope this helps.

Related

Collision detection on enemy with wall when there is none

I am trying to develop basic enemy AI in a simple platformer game after following Shaun Spalding's GameMaker 2 platformer tutorials on YouTube. My code is exactly the same as his in the tutorial, but for some reason, when my enemy detects a collision with the wall, he turns around as he is supposed to and then detects another collision where there is none, causing him to turn around again.
This is my code:
// Horizontal collision
if (place_meeting(x + hsp, y, oWall)) {
    show_debug_message(hsp);
    while (!place_meeting(x + sign(hsp), y, oWall)) {
        x += sign(hsp); // slows down I think
    }
    hsp = -hsp;
}
x += hsp;
The hsp = -hsp part is where he turns around. Somehow, he detects another collision as soon as he does so, even though the value of hsp is inverted. Can anyone point me in the direction of why this may be occurring?
(The value of hsp is initialized at 3 and never changed, except for the inversion.)
Is he turning back toward the wall after a short while, or is he stuck, flickering rapidly between left and right? Both could mean that the collision point isn't updating properly.
When I run into collision problems, I use a crosshair sprite and draw it at the position where the collision should happen; that way I have a visible view of the 'collision point'.
Another cause could be the sprite's origin point, which determines at which position the x and y appear: the sprite may be colliding with the wall itself after turning. Keep the origin point at the center of its collision mask to avoid getting stuck in a wall.
EDIT: Another possibility: the collision point still checks inside the sprite.
For that, you could also try using an offset that keeps the collision point away from the sprite's collision mask. To make that work, you'll need to keep the inverted direction out of your horizontal speed. Something like this:
// Horizontal collision
_offset = 15; // moves the collision point away, to check in front of the sprite; value depends on the size of the sprite
_dir = 1;     // the direction, should only be 1 or -1

// hsp should no longer be inverted; use a new variable (like _dir) instead
collisionPoint = (hsp + _offset) * _dir;
if (place_meeting(x + collisionPoint, y, oWall)) {
    show_debug_message(collisionPoint);
    while (!place_meeting(x + sign(collisionPoint), y, oWall)) {
        x += sign(collisionPoint);
    }
    _dir = -_dir;
}
x += hsp * _dir;

Do I need to move the tiles or the player in a 2d tile world?

I'm currently creating a 2D tile game and I'm wondering whether the tiles have to move or the character.
I ask because I have already created the 2D tile map, but it runs too slowly and I can't fix it. I have tried everything, and the result is that I get 30 fps.
The reason it runs so slowly is that the tiles are redrawn by a timer every 1 ms, but I can't figure out how to fix this problem.
This is how I make the map:
public void makeBoard()
{
    for (int i = 0; i < tileArray.GetLength(0); i++)
    {
        for (int j = 0; j < tileArray.GetLength(1); j++)
        {
            tileArray[i, j] = new Tile() { xPos = j * 50, yPos = i * 50 };
        }
    }
}
Here I redraw the tiles and sprites every 1 ms or more:
private void Wereld_Paint_1(object sender, PaintEventArgs e)
{
    //label1.Text = k++.ToString();
    using (Graphics grap = Graphics.FromImage(bmp))
    {
        for (int i = 0; i < tileArray.GetLength(0); i++)
        {
            for (int j = 0; j < tileArray.GetLength(1); j++)
            {
                grap.DrawImage(tileArray[i, j].tileImage, j * 50, i * 50, 50, 50);
            }
        }
        grap.DrawImage(player.movingObjectImage, player.xPos, player.yPos, 50, 50);
        grap.DrawImage(enemyGoblin.movingObjectImage, enemyGoblin.xPos, enemyGoblin.yPos, 50, 50);
        groundPictureBox.Image = bmp;
        // grap.Dispose();
    }
}
This is the timer with a specific interval:
private void UpdateTimer_Tick(object sender, EventArgs e)
{
    if (player.Update() == true) // true: a key-down event was fired
    {
        this.Invalidate();
    }
    label1.Text = lastFrameRate.ToString(); // show fps
    CalculateFrameRate();                   // show fps
}
Are you writing the tile implementation yourself? The issue is probably that you are drawing all the tiles every frame.
2D engines with scrolling tiles should draw the tiles onto a sprite larger than the screen, then draw that sprite around, which is a fast operation (you'd need to specify the language you're using for hints on how to actually make that fast; basically an in-video-memory accelerated blit, but every language has its way to make it happen).
When the border of this supersprite comes closer to the screen border than a threshold (usually half a tile), the larger sprite is redrawn around the current position. But there is no need to draw all the tiles for this: start by copying the old supersprite onto the recentered one, and you only need to draw the tiles that were missing from the previous supersprite because of the offset.
As mentioned in the comments your concept is wrong. So here's just a simple summary of how to do this task:
Tile map is static
From a functional point of view it does not matter whether the player moves or the map, but from a performance point of view the number of tiles is hugely bigger than the number of players, so moving the player is faster.
To achieve player-centered or follow views, you have to move the camera too.
rendering
Repainting every 1 ms is insane and most likely impossible on today's computers if your scene has medium complexity. Human vision can't detect it anyway, so there is no point in repainting at more than 25-40 fps. The only reason for a higher fps is to synchronize with your monitor's refresh to avoid scan-line artifacts (even LCDs use scan-line refreshing). Having more fps than the refresh rate of your monitor is pointless (many FPS players would object, but our perception is what it is, no matter what they say).
Anyway, if your rendering takes more than 1 ms (which is more than likely), then your timer is screwed, because it will fire several times before the first handler even finishes. That usually causes massive slowdowns due to synchronisation problems, so the resulting fps is often even lower than what the rendering engine could provide. So how to remedy that?
set the timer interval to 20 ms or more
add a bool _redraw = false
use it to repaint only when you need to: on any action like player movement, camera movement or turn, or an animation change, set it to true
inside the timer event handler, call your repaint only if _redraw == true, and set it to false afterwards
This will boost performance a lot. Even if your repaint takes more than the timer interval, this will still be much, much faster than your current approach.
To avoid flickering use Back buffering.
camera and clipping
Your map is most likely much bigger than the screen, so there is no point in repainting all the tiles. You can look at the camera as a means to select the right part of your map. If your game does not use rotations, then you need just a position and maybe a zoom/scale. If you want rotations, then 2D 3x3 homogeneous matrices are the way to go.
Let's assume you have only a position (no zoom or rotation); then you can use these transformations:
screen_x=world_x-camera_x
screen_y=world_y-camera_y
world_x=screen_x+camera_x
world_y=screen_y+camera_y
Here camera is your camera view position, world is your tile position in the map grid, and screen is the position on screen. If you have the indexes of your tile in the map, just multiply them by the tile size in pixels to obtain the world coordinates.
To select only the visible tiles, take the corner positions of your screen, convert them into world coordinates, then into indexes into the map, and finally render only the tiles inside the rectangle those points form, plus some margin of error (for example, enlarge the rectangle by one tile in all directions). This way the rendering is independent of your map size. This process is called clipping.
I strongly recommend looking at these related QAs:
Improving performance of click detection on a staggered column isometric grid
2D Diamond (isometric) map editor ... read the comments there !!!
The demos in the linked QAs use only GDI and direct pixel access to bitmaps in a Win32 Forms app, so you can compare performance with your code (they should be similar) and tweak your code until it behaves as it should.

Invisible, interactable objects in AS3 -- how to code efficient invisibility?

Alpha invisibility.
I currently define circular regions on some images as "hot spots". For instance, I could have my photo on screen and overlay a circle on my head. To check for interaction with my head in realtime, I would returnOverlaps and do some manipulation on all objects overlapping the circle. For debugging, I make the circle yellow with alpha 0.5, and for release I decrease alpha to 0, making the circle invisible (as it should be).
Does this slow down the program? Is there another way to make the circle itself invisible while still remaining capable of interaction? Is there some way to color it "invisible" without using a (potentially) costly alpha of 0? Cache as bitmap matrix? Or some other efficient way to solve the "hot spot" detection without using masks?
Having just a few invisible display objects should not slow it down much, but having many could. I think a cleaner option may be to handle it all in code, rather than having actual invisible display objects on the stage.
For a circle, you would define the center point and radius. Then, to check whether a click landed inside it, you could go:
var xDist:Number = circle.x - mousePoint.x;
var yDist:Number = circle.y - mousePoint.y;
if ((xDist * xDist) + (yDist * yDist) <= (circle.radius * circle.radius)) {
    // mousePoint is within circle
} else {
    // mousePoint is outside of circle
}
If you insist on using display objects to set these circular hit areas (sometimes it is easier visually than by numbers), you could also write some code that reads those display objects in (and removes them from rendering) to get their positions and radius sizes.
added method:
// inputX and inputY are the hotspot's x and y positions; inputRadius is the radius of the hotspot
function hitTestObj(inputA:DisplayObject, inputX:int, inputY:int, inputRadius:int):Boolean {
    var xDist:Number = inputX - inputA.x;
    var yDist:Number = inputY - inputA.y;
    var minDist:Number = inputRadius + (inputA.width / 2);
    return ((xDist * xDist) + (yDist * yDist)) <= (minDist * minDist);
}
An alpha of 0 isn't all that costly in terms of rendering, as Flash Player optimizes for that (check here for actual figures). Bitmap caching wouldn't be of any help, since the sprite is invisible. There are other ways to perform collision detection by doing the math yourself (more relevant in games with tens or even hundreds of sprites), but that would be overkill in your case.

Loop through all pixels and get/set individual pixel color in OpenGL?

I wrote a little thingy with Processing that I would now like to turn into a Mac OS X screen saver. However, diving into OpenGL was not as easy as I thought it would be.
Basically, I want to loop through all pixels on screen and, based on each pixel's color, set another pixel's color.
The Processing code looks like this:
void setup() {
    size(500, 500, P2D);
    frameRate(30);
    background(255);
}

void draw() {
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            float xRand2 = x + random(2);
            float yRand2 = y + random(2);
            int xRand = int(xRand2);
            int yRand = int(yRand2);
            if (get(x, y) == -16777216) {
                set(x + xRand, y + yRand, #FFFFFF);
            }
            else if (get(x, y) == -1) {
                set(x + xRand, y + yRand, #000000);
            }
        }
    }
}
It's not very pretty, nor is it very efficient. However, I'd like to find out how to do something similar with OpenGL. I don't even know where to start.
The basic idea of OpenGL is that you never set the values of individual pixels manually, because that's often too slow. Instead you render triangles and do all kinds of tricks with them, like textures, blending, etc.
In order to freely program what each individual pixel does in OpenGL, you need to use a technique called shaders, and that's not very easy if you haven't done anything similar before. The idea of shaders is that the GPU executes them instead of the CPU, which gives very good performance and takes the load off the CPU. But in your case it is probably a better idea to do it on the CPU without shaders and OpenGL, as that approach is much easier to start with.
I recommend you use a library like SDL (or possibly GLFW), which lets you work with pixels without hardware acceleration. You can still do it with OpenGL too, using the function glDrawPixels, which draws raw pixel data to the screen. But it's probably not very fast.
So start by reading some tutorials about SDL, for example.
Edit: If you want to use shaders, the difficulty (among other things) is that you can't specify the coordinates to which to write pixel values, and you can't read pixel values directly from the screen either. One way to do it with shaders would be the following:
Set up two textures: texture A and texture B.
Bind one of the textures as the target that you render everything to.
Bind the other texture as an input texture for the shader.
Render a full-screen quad using your shader and show the result on the screen.
Swap textures A and B so that your previous result becomes your next input.
Render again.
If you don't want to go the shader route, try doing all your pixel modification in the CPU on a 2D memory array, then use glDrawPixels every frame to push your pixels to the screen. It won't be very hardware accelerated, but it might be fine for your purposes. Another thing to try is to use glTexImage2D to bind your new pixel data every frame to a texture and then render a textured quad to the entire screen. I'm not sure which will be faster. My advice is to try these things before jumping into the complexity of shaders.
There are a few bugs in your code that make reverse engineering and porting it harder, and make me wonder if you actually posted the correct code. Assuming that the visual effect produced is what you want, here is a more efficient and more correct draw():
void draw() {
    loadPixels();
    for (int x = 0; x < width/2; x++) {
        for (int y = 0; y < height/2; y++) {
            int x_new = 2*x + int(random(2));
            int y_new = 2*y + int(random(2));
            if (x_new < width && y_new < height) {
                int dest_pixel = (y_new * width + x_new);
                color c = pixels[y * width + x];
                if (c == #FFFFFF) {
                    pixels[dest_pixel] = #000000;
                }
                else {
                    pixels[dest_pixel] = #FFFFFF;
                }
            }
        }
    }
    updatePixels();
}
Note that the upper bounds of the loop are divided by two. As you wrote it, 3/4 of your set() calls were for pixels that are beyond the bounds of the window. The extra if is necessary because of the addition of small random values to the coordinates.
The overall effect of this code could be described as an in-place stretch and invert of the image, with a little bit of randomness thrown in. Because it's an in-place transformation, it can't be easily parallelized or accelerated, so you are best off implementing this as bitmap/texture operations on the CPU. You can do this without having to ever read pixels from the GPU, but you will have to push a screen full of pixels to the GPU each frame.
If you use glDrawPixels with a format argument of GL_LUMINANCE and a type argument of GL_UNSIGNED_BYTE, then you can pretty easily convert this code to operate on a byte array, which will keep the memory consumption down somewhat as compared with using 32-bit RGBA values.

Correct calculations of floats in OpenGL ES

I'm making a game in 3D. Everything in my code is correct, although I'm confused about one thing.
When I set up my perspective (gluPerspective), I set zNear = 0.1f and zFar = 100.0f. So far so good. Now I also want to move things in just the x or y direction via glTranslate.... But the origin is in the absolute center of my screen, and unlike the z axis with its zNear and zFar, x and y have no obvious range. For example, if I move my sprite by -2.0f on the x axis with glTranslate..., it is almost off the screen, while the z axis does not behave like that. This makes calculations in all directions a lot more difficult: it is hard to assign a sensible float value to each object, and for now I just pick values that happen to keep them inside the screen.
So I have trouble calculating correct values for each object. Have I missed something? Should I change something, or think about it differently? This matters because I need to know the absolute left and right edges of my screen to make these calculations.
This is my onSurfaceChanged:
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluPerspective(gl, 45.0f, (float)width / (float)height,
                       0.1f, 100.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}
Thanks in advance!
When you use gluPerspective, you are transforming your coordinates from 3D world space into 2D screen space using a matrix that looks at (0,0,0) by default (i.e. x = 0, y = 0 is the center of the screen). When you set your object coordinates, you are doing it in world space, NOT screen space.
If you want to do 2D graphics (where things are given coordinates relative to their position on the screen), you want to use gluOrtho2D instead.