Scrolling parallax background, infinitely repeated in libgdx

I'm making a 2D sidescrolling space shooter-type game, where I need a background that can be scrolled infinitely (tiled or wrapped repeatedly). I'd also like to implement parallax scrolling: perhaps a lowest background layer with a nebula texture that barely moves, a middle layer of far-away stars that moves slightly more, and a top background layer of close stars that moves the most.
I gather from Google that each layer should move 50% less than the layer above it, but how do I implement this in libgdx? I have a Camera that can be zoomed in and out, and on the physical 800x480 screen it could show anything from 128x128 pixels (a ship) to a huge area of space with the textures wrapped multiple times at their edges.
How do I continuously wrap a smaller texture (say 512x512) as if it were infinitely tiled (for when the camera is zoomed right out), and then how do I layer multiple textures like these, keep them together in a suitable structure (is there one in the libgdx API?), and move them as the player's coordinates change? I've looked at the javadocs and the examples but can't find anything like this problem; apologies if it's obvious!

Hey, I'm also making a parallax background and trying to get it to scroll.
There is a ParallaxTest.java in the repository; it can be found here.
This file is a standalone class, so you will need to incorporate it into your game however you want, and you will need to change the control input, since it's hooked up to use the touch screen/mouse.
This worked for me. As for the repeating background, I haven't gotten that far yet, but I think you just need basic logic: when you're one screen away from the end, reposition the first few screens so they line up at the end (see the sketch below).
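For what it's worth, here is a minimal sketch of that wrap-around logic (my own guess at an implementation, with assumed names like segments and segmentWidth; it is not taken from ParallaxTest.java):

// Wrap each background segment's x position with modulo so the strip repeats
// forever. Assumes the strip is at least one viewport plus one segment wide.
float stripWidth = segmentWidth * segments.length;
for (int i = 0; i < segments.length; i++) {
    // This segment's offset relative to the camera, wrapped into [0, stripWidth).
    float offset = ((i * segmentWidth - camera.position.x) % stripWidth + stripWidth) % stripWidth;
    // Shift one segment left so a partially visible segment on the left edge is drawn too.
    batch.draw(segments[i], camera.position.x - segmentWidth + offset, 0);
}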

I don't have much more to say regarding parallax scrolling than PFG already did. There is indeed an example in the repository under the test folder, and several explanations around the web. I liked this one.
The background problem is really easy to solve. This and other related problems can be approached using modular arithmetic. I won't go into the details, because once shown it is very easy to understand.
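For instance (my own one-liner, not from the example below): Java's % operator keeps the sign of the dividend, so a possibly negative scroll position is wrapped safely into [0, width) like this:

// Wrap any scroll position (possibly negative) into [0, width).
float wrapped = ((pos % width) + width) % width;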
Imagine that you want to show a compass on your screen. You have a 1024x16 texture representing the cardinal points; basically, all you have is a strip. Leaving aside considerations about the real orientation and such, you have to render it.
Your viewport is 300x400, for example, and you want 200px of the texture on screen (to make it more interesting). You can render it perfectly with a single region until you reach position (1024-200) = 824. Once you pass that position there is clearly no more texture. But since it is a compass, once you reach the end it obviously has to start again. So this is the answer: a second texture region will do the trick. For every position pos > 824, the first region shrinks to a width of (1024 - pos), and the remaining 200 - (1024 - pos) pixels are taken from the start of the strip by the second region.
This code is intended to work as a real example of a compass. It's rather dirty, since it works with relative positions all the time due to the conversion between the range (0-3.6) and (0-1024).
spriteBatch.begin();
// Wrap the orientation into [0, 3.6), the range that maps onto the 1024px strip.
// (Java's % keeps the sign, so negative values need 3.6 added, not subtracted.)
if (compassorientation < 0)
    compassorientation = (float) (3.6 + compassorientation % 3.6);
else
    compassorientation = (float) (compassorientation % 3.6);
// Pixel offset into the 1024px-wide texture, plus a shared y and height.
int pos = (int) (compassorientation / 3.6f * 1024);
float y = (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * 1.2f);
float h = 32 * 1.2f;
if (pos <= 1024 - 200) {
    // The 200px window fits inside the strip: a single region is enough.
    compass1.setRegion(pos, 0, 200, 16);
    spriteBatch.draw(compass1, 0, y, Gdx.graphics.getWidth(), h);
} else {
    // The window runs off the right edge: draw the tail of the strip...
    compass1.setRegion(pos, 0, 1024 - pos, 16);
    float firstWidth = compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth();
    spriteBatch.draw(compass1, 0, y, firstWidth, h);
    // ...then wrap around to the start of the strip for the remaining pixels.
    compass2.setRegion(0, 0, 200 - compass1.getRegionWidth(), 16);
    spriteBatch.draw(compass2, firstWidth, y, Gdx.graphics.getWidth() - firstWidth, h);
}
spriteBatch.end();

You can use the setWrap function like below:
Texture texture = new Texture(Gdx.files.internal("images/background.png"));
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
It will draw the background repeatedly! (Note that on GLES1/2 the texture needs power-of-two dimensions for Repeat to work.) Hope this helps!

Beneath where you initialize the Texture for the object, add this:
YourTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
where YourTexture is the texture that you want to parallax scroll.
Then, in your render method, add this code:
batch.draw(YourTexture, 0, 0, 0, srcy, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
srcy += 10;
It is going to give you an error, so declare a variable called srcy. It is nothing too fancy:
int srcy;
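Putting the two answers above together, a minimal sketch of the parallax part of the original question might look like this (the layer names and scroll factors are my own assumptions, not from the question):

// Each background layer scrolls at a fraction of the camera's movement;
// deeper layers get smaller factors. All textures need Repeat wrap set as
// shown above (and power-of-two sizes on GLES1/2).
Texture[] layers  = { nebula, farStars, closeStars };  // assumed textures
float[]   factors = { 0.1f, 0.25f, 0.5f };             // closer layers move more

batch.begin();
for (int i = 0; i < layers.length; i++) {
    // Scroll offset in texels: the camera position scaled by this layer's factor.
    int srcX = (int) (camera.position.x * factors[i]);
    int srcY = (int) (camera.position.y * factors[i]);
    // A source region larger than the texture tiles it, thanks to Repeat wrap.
    batch.draw(layers[i], 0, 0, srcX, srcY,
               Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}
batch.end();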

Related

Cocos2d v3 repeat texture

I was wondering how I should repeat a texture in Cocos2d 3. I have a background and I want to "tile" it across the screen. I have found this, which uses ccTexParams with GL_REPEAT, but those have been made private in version 3 of cocos.
I have found another solution, which can be found here: it creates a loop and positions a new child node based on the size of the texture and the size you want. But is that performant? When you have a 1px-wide background texture and want to repeat it on an iPad retina, you end up with more than 2000 child nodes.
What is the best way to repeat a texture?
Well, since there was no method for repeating without a POT texture, I made something of my own that takes care of it.
It might be useful for someone who has this same question. The code can be found here on GitHub.
The CCTexture2D class has a setTexParameters: method to set the repeat mode. Also note that the texture must have power-of-two width and height; otherwise repeat mode will be disabled.

What's a good way to optimise rendering a 2D tile game in XNA?

EDIT: I've opted for the second approach, as I got 150+ fps even when all 3 tile layers fill the entire screen.
EDIT 2: I read a lot about vertex buffer objects and how they would be great for static geometry, and although I still have no idea how to turn my 2D tiles into a VBO and store it in GPU memory, it definitely seems like the way to go if anyone else is looking for a fast way to render static geometry/quads.
I'm making a game like Super Meat Boy and was wondering if it would be better/faster to store level tiles in an array list and do a camera bounds overlap test to see if it should be rendered.
foreach (Tile tile in world.tiles) {
    if (Overlap(camera.bounds, tile))
        render(tile);
}
Or would a 2D array storing every grid square and only reading off between camera bounds be better?
int left   = (int)(camera.position.x - camera.width/2);
int right  = (int)(camera.position.x + camera.width/2) + 1;
int top    = (int)(camera.position.y - camera.height/2); // WHY XNA DO YOU UPSIDE DOWN!!!
int bottom = (int)(camera.position.y + camera.height/2) + 1;
for (int x = left; x < right; x++) {
    for (int y = top; y < bottom; y++) {
        render(world.tiles[x][y]);
    }
}
The camera can fit 64x36 tiles on screen, which is 2300-odd tiles to read off using the latter approach; but is doing an overlap test against every tile in the level any better? I read an answer about joining matching adjacent tiles into a larger quad and just repeating the texture (although I'm using a texture atlas, so I'm not sure how to repeat a region of a texture).
Cheers guys.
From my past experience I can share some details. In a 2D game, the map is normally 0..N long, where N is far longer than the screen size. At first I tried loading everything at once, but it was very much of an overhead, and I ended up with 0 FPS. Since I wanted different kinds of objects, even repeating the same object to save memory didn't work. Then I tried bounding things with reference to the screen: the objects are still there, but if they aren't visible they never enter the draw pipeline. And the game came back to life.
For further performance, with C# 4.0 you can use the TPL and async/await with your draw code. It's like a better version of threading, so you can throw work there and let it render at will.
Here is the deal with XNA or any kind of graphics library: there is a complete graphics rendering pipeline, and that makes things a whole lot slower, especially if the PC is old and only has a 64MB graphics card. Your game will be deployed to any kind of machine, right?
To put it in XNA language: Update is simple code, and it runs as fast as it can; there is nothing to stop it. But Draw has the complete pipeline ahead of it, and that is the sole reason for having Begin and End: after End, it can start pushing things through the pipeline. See this article for reference: http://classes.soe.ucsc.edu/cmps020/Winter11/readings/hlsl.pdf
So, here is the deal: the rendering pipeline is needed, but there is no reason it should be slow and blocking. Just make it multi-threaded and things will be quite a bit faster for you. If you want to go further, you'll have to use C# to its fullest, including linked lists and the like, but that would be the last stage.
I hope I have given enough details to provide you an answer. Please let me know if any further details are needed.

Rendering very large image from scratch then splitting it into tiles (for Google Maps)

I have a large set of Google Maps API v3 polylines and markers that need to be rendered as transparent PNGs (implemented as an ImageMapType). I've done all the math/geometry regarding transformations from latLng to pixel and tile coordinates.
The problem is: at the maximum allowable zoom for my app, which is 18, the compound image would span at least 80000 pixels in both width and height. So rendering it in one piece and then splitting it into tiles is impossible.
I tried the method of splitting polylines beforehand and placing the parts into tiles, then rendering each tile alone, which up until now has worked almost fine. But it will become very difficult when I need to draw stylized markers, text, and other fancy stuff.
So far I've used C# GDI+ as the drawing method (the ol' Bitmap/Graphics pair).
Many questions here are about splitting an already existing image, storing it, and linking it to the API. I already know how to do that.
My problem is: how do I draw the initial very large image and then split it up? It doesn't really need to be a true image/bitmap/call-it-whatever-you-want solution. A friend suggested SVG, but I don't know any good rendering solutions to suit my needs.
To make it a little easier to comprehend, think of it in terms of input/output. My input is the data that I need to draw (lines, circles, text, etc.) spread across tens of thousands of pixels, and the output must be the tiles. I really don't care what the 'magic box' is, and I don't even care what the platform is.
I ran into the same problem when creating custom tiles, and you are on the right track with your solution of creating one tile at a time. You just need to add some strategy to the process. What I do is like this:
Pseudo code:
for each tile {
    determine the lat/lon corners of the tile
    query the database and load the objects that are within this tile
    for each object {
        calculate the tile pixels on which the object should be painted [*A*]
        draw the object on the tile
    }
    save the tile (you're done with this tile)
}
Alternatively:
Pseudo code:
for each object to be drawn {
    determine which tile the object should be painted on
    calculate the tile pixels on which the object should be painted [*A*]
    get that tile; if it doesn't yet exist, create a new one
    draw the object on the tile
    save the tile (you might need to draw more on this tile later)
}
I do this with Perl and the GD library.
[*A*] When painting objects that span more than one tile: if the object begins on the current tile, the overflowing part is left out automatically, because you'll be attempting to paint outside the tile; if the object began on the previous tile and you're drawing the second part, the pixel numbers should be negative, meaning it began on the neighboring tile.
This is a bit hard to explain in a written post so please feel free to ask for further clarification if you need it and I'll edit the answer.
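To make step [*A*] concrete, here is a small sketch in Java (the answer itself uses Perl and GD), assuming standard 256px Web Mercator tiles; the names are mine:

// Convert an object's lat/lon to pixel coordinates relative to one tile.
// The result is negative when the object starts on a neighboring tile.
static int[] tilePixel(double lat, double lon, int zoom, int tileX, int tileY) {
    double worldSize = 256.0 * (1 << zoom);  // whole-map size in pixels
    double worldX = (lon + 180.0) / 360.0 * worldSize;
    double latRad = Math.toRadians(lat);
    double worldY = (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad))
                     / Math.PI) / 2 * worldSize;
    return new int[] { (int) Math.floor(worldX - tileX * 256),
                       (int) Math.floor(worldY - tileY * 256) };
}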
I'd recommend getting to know GDAL (http://gdal.org) and its libraries. It has libraries for rasterization, tiling, data conversion, projections, warping, and much more.

How does WebGL work?

I'm looking for a deep understanding of how WebGL works. I want to gain knowledge at a level that most people care less about, because the knowledge isn't necessarily useful to the average WebGL programmer. For instance, what role does each part (browser, graphics driver, etc.) of the total rendering system play in getting an image on the screen?
Does each browser have to create a JavaScript/HTML engine/environment in order to run WebGL in the browser? Why is Chrome ahead of everyone else in terms of WebGL compatibility?
So, what are some good resources to get started? The Khronos specification is kind of lacking (from what I saw browsing it for a few minutes) for what I want. I mostly want to know how this is accomplished/implemented in browsers, and what else needs to change on your system to make it possible.
Hopefully this little write-up is helpful to you. It gives an overview of a big chunk of what I've learned about WebGL and 3D in general. BTW, if I've gotten anything wrong, somebody please correct me, because I'm still learning, too!
Architecture
The browser is just that, a Web browser. All it does is expose the WebGL API (via JavaScript), which the programmer does everything else with.
As near as I can tell, the WebGL API is essentially just a set of (browser-supplied) JavaScript functions which wrap around the OpenGL ES specification. So if you know OpenGL ES, you can adopt WebGL pretty quickly. Don't confuse this with pure OpenGL, though. The "ES" is important.
The WebGL spec was intentionally left very low-level, leaving a lot to
be re-implemented from one application to the next. It is up to the
community to write frameworks for automation, and up to the developer
to choose which framework to use (if any). It's not entirely difficult
to roll your own, but it does mean a lot of overhead spent on
reinventing the wheel. (FWIW, I've been working on my own WebGL
framework called Jax for a while
now.)
The graphics driver supplies the implementation of OpenGL ES that actually runs your code. At this point, it's running on the machine hardware, below even the C code. While this is what makes WebGL possible in the first place, it's also a double-edged sword, because bugs in the OpenGL ES driver (of which I've noted quite a number already) will show up in your Web application, and you won't necessarily know it unless you can count on your user base to file coherent bug reports including OS, video hardware, and driver versions. Here's what the debug process for such issues ends up looking like.
On Windows, there's an extra layer which exists between the WebGL API and the hardware: ANGLE, or "Almost Native Graphics Layer Engine". Because the OpenGL ES drivers on Windows generally suck, ANGLE receives those calls and translates them into DirectX 9 calls instead.
Drawing in 3D
Now that you know how the pieces fit together, let's look at a lower-level explanation of how they combine to produce a 3D image.
JavaScript
First, the JavaScript code gets a 3D context from an HTML5 canvas element. Then it registers a set of shaders, which are written in GLSL ([Open] GL Shading Language) and essentially resemble C code.
The rest of the process is very modular. You need to get vertex data and any other information you intend to use (such as vertex colors, texture coordinates, and so forth) down to the graphics pipeline using uniforms and attributes which are defined in the shader, but the exact layout and naming of this information is very much up to the developer.
JavaScript sets up the initial data structures and sends them to the WebGL API, which sends them to either ANGLE or OpenGL ES, which ultimately sends it off to the graphics hardware.
Vertex Shaders
Once the information is available to the shader, the shader must transform it in 2 phases to produce 3D objects. The first phase is the vertex shader, which sets up the mesh coordinates. (This stage runs entirely on the video card, below all of the APIs discussed above.) Most often, the process performed by the vertex shader looks something like this:
gl_Position = PROJECTION_MATRIX * VIEW_MATRIX * MODEL_MATRIX * VERTEX_POSITION
where VERTEX_POSITION is a 4D vector (x, y, z, and w, which is usually set to 1); VIEW_MATRIX is a 4x4 matrix representing the camera's view of the world; MODEL_MATRIX is a 4x4 matrix which transforms object-space coordinates (that is, coords local to the object before any rotation or translation has been applied) into world-space coordinates; and PROJECTION_MATRIX is a 4x4 matrix representing the camera's lens.
Most often, the VIEW_MATRIX and MODEL_MATRIX are precomputed and
called MODELVIEW_MATRIX. Occasionally, all 3 are precomputed into
MODELVIEW_PROJECTION_MATRIX or just MVP. These are generally meant
as optimizations, though I'd like to find time to do some benchmarks. It's
possible that precomputing is actually slower in JavaScript if it's
done every frame, because JavaScript itself isn't all that fast. In
this case, the hardware acceleration afforded by doing the math on the
GPU might well be faster than doing it on the CPU in JavaScript. We can
of course hope that future JS implementations will resolve this potential
gotcha by simply being faster.
Clip Coordinates
When all of these have been applied, the gl_Position variable will have a set of XYZ coordinates ranging within [-1, 1], and a W component. These are called clip coordinates.
It's worth noting that clip coordinates are the only thing the vertex shader really
needs to produce. You can completely skip the matrix transformations
performed above, as long as you produce a clip coordinate result. (I have even
experimented with swapping out matrices for quaternions; it worked
just fine but I scrapped the project because I didn't get the
performance improvements I'd hoped for.)
After you supply clip coordinates to gl_Position, WebGL divides the result by gl_Position.w, producing what's called normalized device coordinates.
From there, projecting a pixel onto the screen is a simple matter of multiplying by half the screen dimensions and then adding half the screen dimensions.[1] Here are some examples of clip coordinates translated into 2D coordinates on an 800x600 display:
clip = [0, 0]
x = (0 * 800/2) + 800/2 = 400
y = (0 * 600/2) + 600/2 = 300
clip = [0.5, 0.5]
x = (0.5 * 800/2) + 800/2 = 200 + 400 = 600
y = (0.5 * 600/2) + 600/2 = 150 + 300 = 450
clip = [-0.5, -0.25]
x = (-0.5 * 800/2) + 800/2 = -200 + 400 = 200
y = (-0.25 * 600/2) + 600/2 = -150 + 300 = 150
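The same arithmetic as a tiny helper, sketched in Java since this thread spans several languages, including the perspective divide by w described above (this is an illustration of the worked examples, not any browser's actual code):

// Convert clip coordinates to window (pixel) coordinates, assuming a
// viewport of (0, 0, width, height) and ignoring gl.depthRange.
static float[] clipToWindow(float clipX, float clipY, float clipW,
                            int width, int height) {
    float ndcX = clipX / clipW;  // perspective divide gives
    float ndcY = clipY / clipW;  // normalized device coordinates
    float winX = ndcX * width / 2f + width / 2f;
    float winY = ndcY * height / 2f + height / 2f;
    return new float[] { winX, winY };
}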
Pixel Shaders
Once it's been determined where a pixel should be drawn, the pixel is handed off to the pixel shader, which chooses the actual color the pixel will be. This can be done in a myriad of ways, ranging from simply hard-coding a specific color to texture lookups to more advanced normal and parallax mapping (which are essentially ways of "cheating" texture lookups to produce different effects).
Depth and the Depth Buffer
Now, so far we've ignored the Z component of the clip coordinates. Here's how that works out. When we multiplied by the projection matrix, the third clip component resulted in some number. If that number is greater than 1.0 or less than -1.0, then it is beyond the view range of the projection matrix, corresponding to the matrix's zFar and zNear values, respectively.
So if it's not in the range [-1, 1], it's clipped entirely. If it is in that range, the Z value is scaled to 0 to 1[2] and compared against the depth buffer[3]. The depth buffer matches the screen dimensions, so if an 800x600 projection is used, the depth buffer is 800 pixels wide and 600 pixels high. We already have the pixel's X and Y coordinates, so they are plugged into the depth buffer to get the currently stored Z value. If the stored Z value is greater than the new Z value, then the new Z value is closer than whatever was previously drawn, and replaces it[4]. At this point it's safe to light up the pixel in question (or, in the case of WebGL, draw the pixel to the canvas) and store the new Z value as the depth value.
If the new Z value is greater than the stored depth value, it is deemed to be "behind" whatever has already been drawn, and the pixel is discarded.
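To make the comparison concrete, here is the depth-test logic just described as a small Java sketch (my own illustration; real GL does this in hardware, and putPixel is a stand-in for writing to the framebuffer):

class DepthBufferSketch {
    static final int W = 800, H = 600;
    float[][] depth = new float[W][H];  // one depth value per pixel

    DepthBufferSketch() {
        // Initialize every pixel's depth to the far plane (1.0).
        for (float[] column : depth)
            java.util.Arrays.fill(column, 1.0f);
    }

    void testAndWrite(int x, int y, float z, int color) {
        if (z < depth[x][y]) {   // gl.LESS, the default gl.depthFunc
            depth[x][y] = z;     // the new fragment is closer: keep it
            putPixel(x, y, color);
        }                        // otherwise the fragment is discarded
    }

    void putPixel(int x, int y, int color) { /* write to the canvas */ }
}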
[1]The actual conversion uses the gl.viewport settings to convert from normalized device coordinates to pixels.
[2]It's actually scaled to the gl.depthRange settings, which default to 0 and 1.
[3]Assuming you have a depth buffer and you've turned on depth testing with gl.enable(gl.DEPTH_TEST).
[4]You can set how Z values are compared with gl.depthFunc
I would read these articles:
http://webglfundamentals.org/webgl/lessons/webgl-how-it-works.html
Assuming those articles are helpful, the rest of the picture is that WebGL runs in a browser. It renders to a canvas tag. You can think of a canvas tag like an img tag, except you use the WebGL API to generate an image instead of downloading one.
Like other HTML5 tags, the canvas tag can be styled with CSS and sit under or over other parts of the page. It is composited (blended) with other parts of the page, and it can be transformed, rotated, and scaled by CSS along with other parts of the page. That's a big difference from OpenGL or OpenGL ES.

Writing a paint program à la MS Paint - how to interpolate between mouse move events?

I want to write a paint program in the style of MS Paint.
To paint things on screen as the user moves the mouse, I have to wait for mouse move events and draw whenever I receive one. Apparently, mouse move events are not sent very often, so I have to interpolate the mouse movement by drawing a line between the current mouse position and the previous one. In pseudocode, this looks something like this:
var positionOld = null

def handleMouseMove(positionNew):
    if mouse.button.down:
        if positionOld == null:
            positionOld = positionNew
        screen.draw.line(positionOld, positionNew)
        positionOld = positionNew
Now my question: interpolating with straight line segments looks too jagged for my taste. Can you recommend a better interpolation method? What method do GIMP or Adobe Photoshop use?
Alternatively, is there a way to increase the frequency of the mouse move events that I receive? The GUI framework I'm using is wxWidgets.
(Programming language: Haskell, but that's irrelevant here.)
EDIT: Clarification: I want something that looks smoother than straight line segments; see the picture:
EDIT 2: The code I'm using looks like this:
-- create bitmap and derive drawing context
im     <- imageCreateSized (sy 800 600)
bitmap <- bitmapCreateFromImage im (-1)   -- wxBitmap
dc     <- memoryDCCreate                  -- wxMemoryDC
memoryDCSelectObject dc bitmap
...

-- handle mouse move
onMouse ... sw (MouseLeftDrag posNew _) = do
    ...
    line dc posOld posNew [ color    := white
                          , penJoin  := JoinRound
                          , penWidth := 2 ]
    repaint sw   -- a wxScrolledWindow

-- handle paint event
onPaint ... = do
    ...
    -- draw bitmap on the wxScrolledWindow
    drawBitmap dc_sw bitmap pointZero False []
which might make a difference. Maybe my choice of wx classes is why I'm getting a rather low frequency of mouse move events.
Live demos
version 1 - smoother, but changes more while you draw: http://jsfiddle.net/Ub7RV/1/
version 2 - less smooth, but more stable: http://jsfiddle.net/Ub7RV/2/
The way to go is spline interpolation of the points: store the coordinates of the points and then perform spline interpolation.
I took the solution demonstrated here and modified it. They computed the spline after you stop drawing; I modified the code so that it draws immediately. You might notice, though, that the spline changes while you draw. For a real application, you will probably need two canvases: one with the old drawings, and one with just the current drawing, which changes constantly until your mouse stops.
Version 1 uses spline simplification (it deletes points that are close to the line), which results in smoother splines but produces a less "stable" result. Version 2 uses all points on the line and produces a much more stable solution.
You can make them really smooth using splines:
http://freespace.virgin.net/hugo.elias/graphics/x_bezier.htm
But you'll have to delay the drawing of each line segment until one frame later, so that you have the start and end points, plus the next and previous points available for the calculation.
As I see it, the problem of the jagged edges of a freehand curve when the mouse moves very fast is not solved! In my opinion you'd need to work around the polling frequency of the mousemove event in the system, i.e. by using a different mouse driver or similar. The second way is the math: using some kind of algorithm to accurately bend the straight line between two points when the mouse event is polled. For a clear view, you can compare how a freehand line is drawn in Photoshop and how in MS Paint. Thanks folks.. ;)
I think you need to look into the Device Context documentation for wxWidgets.
I have some code that draws like this:
// screenArea is a wxStaticBitmap
int startx, starty;

void OnMouseDown(wxMouseEvent& event)
{
    screenArea->CaptureMouse();
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}

void OnMouseMove(wxMouseEvent& event)
{
    if (event.Dragging() && event.LeftIsDown())
    {
        wxClientDC dc(screenArea);
        dc.SetPen(*wxBLACK_PEN);
        dc.DrawLine(startx, starty, event.GetX(), event.GetY());
    }
    startx = event.GetX();
    starty = event.GetY();
    event.Skip();
}
I know it's C++ but you said the language was irrelevant, so I hope it helps anyway.
This lets me do this:
which seems significantly smoother than your example.
Interpolating mouse movements with line segments is fine; GIMP does it that way too, as the following screenshot from a very fast mouse movement shows:
So, smoothness comes from a high frequency of mouse move events. wxWidgets can do that, as the example code for a related question demonstrates.
The problem is in your code, Heinrich: drawing into a large bitmap first and then copying the whole bitmap to the screen is not cheap! To estimate how efficient you need to be, compare your problem to video games: a smooth rate of 30 mouse move events per second corresponds to 30fps. Copying a double buffer is no problem for modern machines, but wxHaskell is likely not optimized for video games, so it's not surprising that you experience some jitter.
The solution is to draw only as much as necessary, i.e. just the lines, directly on the screen, for example as shown in the link above.
I agree with harviz - the problem isn't solved. It should be solved at the operating system level, by recording mouse movements in a priority thread, but no operating system I know of does that. However, the app developer can also work around this operating system limitation by interpolating better than linearly.
Since mouse movement events don't always come fast enough, linear interpolation isn't always enough.
I experimented a little bit with the spline idea brought up by Rocketmagnet.
Instead of drawing a straight line between two points A and D, look at the point P preceding A and use a cubic spline with the following control points B = A + v' and C = D - w', where
v = A - P,
w = D - A,
w' = w / 4 and
v' = v * |w| / |v| / 4.
This means we come into the end point at the same angle the line interpolation would, but leave the starting point at the same angle the previous segment came in at, making the joint smooth. We use the length of the segment for both control point distances so that the size of the bend fits its proportion.
The following picture shows the result with very few data points (indicated in grey).
The sequence starts at the top left and ends in the middle.
There is still some unevenness here, which might be alleviated by using both the previous and the next point to adjust both angles, but that would also mean drawing one point less than what one has got. I find this result satisfactory already, so I didn't try.
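For reference, here is the construction above sketched in Java with Java2D (the thread itself is language-agnostic; the names follow the text, and the method assumes P != A so that |v| is nonzero):

import java.awt.Graphics2D;
import java.awt.geom.CubicCurve2D;
import java.awt.geom.Point2D;

class SmoothStroke {
    // Draw a cubic Bezier from A to D with control points B = A + v'
    // and C = D - w', as defined above. P is the point preceding A.
    static void drawSegment(Graphics2D g, Point2D P, Point2D A, Point2D D) {
        double vx = A.getX() - P.getX(), vy = A.getY() - P.getY();  // v = A - P
        double wx = D.getX() - A.getX(), wy = D.getY() - A.getY();  // w = D - A
        double vLen = Math.hypot(vx, vy), wLen = Math.hypot(wx, wy);
        // v' = v * |w| / |v| / 4: leave A along the incoming direction.
        double bx = A.getX() + vx * wLen / vLen / 4;
        double by = A.getY() + vy * wLen / vLen / 4;
        // w' = w / 4: come into D along the straight line from A.
        double cx = D.getX() - wx / 4;
        double cy = D.getY() - wy / 4;
        g.draw(new CubicCurve2D.Double(A.getX(), A.getY(), bx, by,
                                       cx, cy, D.getX(), D.getY()));
    }
}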
