OpenGL ES 3.0 - Disjoint read and write from same texture

I'm in a situation where I have a (n+1)x(n+1) texture attached to a framebuffer.
My idea is to update this texture in 2 passes.
First pass: draw a full-screen quad and use the scissor test to mask out the outermost one-pixel border, so I write only to the n x n 'inner' region of the texture.
Second pass: draw line primitives to write to the outermost pixels, updating the same texture and using the result of pass 1 as input. The computation in this step depends on the state of the inner (n x n) grid computed in the first pass.
In the second pass I would bind the result of the first pass for both reading and writing. I know that according to OpenGL this would yield undefined behaviour, but since I am never reading and writing simultaneously from the same texels, I think it could work.
Is it a good idea to do it like this, or is it perhaps better to draw a full-screen quad, and in my GLSL shader do a check like:
if (gl_FragCoord.x == 0.5 || gl_FragCoord.x == n + 0.5){
...
}

As you say, OpenGL says it's undefined behaviour, so you can't rely on it working. Your best bet for things like this is to ping-pong between two render targets.
Since I am never reading and writing simultaneously from the same
texels, I think it could work.
Unfortunately this doesn't help. Actually, there are extensions common on mobile (particularly Mali and PowerVR GPUs) that allow you to read and write simultaneously, but only if it's the same texels. Google 'Pixel Local Storage' for details.
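For what it's worth, here is a minimal sketch of the ping-pong idea in OpenGL ES 3.0. It is illustrative only: the names tex, fbo and n are made up, and error checking plus the actual draw calls are omitted.

GLuint tex[2], fbo[2];
int n = 256;                       // grid size as in the question, just an example value
glGenTextures(2, tex);
glGenFramebuffers(2, fbo);
for (int i = 0; i < 2; ++i) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, n + 1, n + 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
}

// Pass 1: write the inner region into tex[0] (scissor masks the 1-pixel border).
glBindFramebuffer(GL_FRAMEBUFFER, fbo[0]);
glEnable(GL_SCISSOR_TEST);
glScissor(1, 1, n - 1, n - 1);
// ... draw the full-screen quad ...

// Pass 2: sample tex[0] while rendering into tex[1], so no texture is ever
// bound for reading and writing at the same time.
glBindFramebuffer(GL_FRAMEBUFFER, fbo[1]);
glDisable(GL_SCISSOR_TEST);
glBindTexture(GL_TEXTURE_2D, tex[0]);
// ... draw the border primitives (or a full-screen quad) ...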

Related

OpenGL ES 3.x How to (performantly) render blended triangles front-to-back with alpha-blending and early-reject occluded fragments?

I recently found out that one can render alpha-blended primitives correctly not just back-to-front but also front-to-back (http://hacksoflife.blogspot.com/2010/02/alpha-blending-back-to-front-front-to.html) by using GL_ONE_MINUS_DST_ALPHA, GL_ONE, premultiplying the fragment's alpha in the fragment shader and clearing destination alpha to black before rendering.
It occurred to me that it would then be great if one could combine this with EITHER early-z rejection OR some kind of early "destination-alpha testing" in order to discard fragments that won't contribute to the final pixel color.
When rendering with front-to-back alpha-blending, a fragment can be skipped if the destination-alpha at this location already contains the value 1.0.
I did prototype-implement that by using GL_EXT_shader_framebuffer_fetch to test the destination alpha at the start of the pixel shader and then manually discard the fragment if the value is above a certain threshold. That works but it made things actually slower on my test hardware (Snapdragon XR2) - so I wonder:
whether it's somehow possible to not even have the fragment shader execute if destination alpha is already above a certain threshold?
alternatively, if it would be possible to only write to the depth buffer for fragments that are completely opaque and leave the current depth buffer value unchanged for all fragments that have an alpha value of less than 1 (but still depth-test every fragment), that should allow the hardware to use early-z rejection for occluded fragments. So,
Is this possible somehow (i.e. use depth testing, but update the depth buffer value only for opaque fragments and leave it unchanged for others)?
Bottom line: this would allow reducing overdraw of alpha-blended sprites to only those fragments that contribute to the final pixel color, and I wonder whether there is a performant way of doing this.
For number 2, I think you could modify gl_FragDepth in the fragment shader to achieve something close, but doing so would disable early-z rejection, so it wouldn't really help.
I think one viable way to reduce overdraw would be to create a tool to generate a mesh for each sprite which aims to cover a decent proportion of the opaque part of the sprite without using too many verts. I imagine for a typical sprite, even just a well placed quad could cover 80%+.
You'd render the generated opaque geometry of your sprites with depth write enabled, and do a second pass the ordinary way with depth testing enabled to cover the transparent parts.
You would massively reduce overdraw, but significantly increase the complexity of your code and the number of verts rendered. You would double your draw calls, but if you're atlasing and using texture arrays, you might be doubling from 1 to 2 draw calls, which is fine. I've never tried it, so I can't say if it's worth all the effort that would be involved.
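A rough sketch of the render-state side of that two-pass idea in GL ES follows. It is illustrative only; generating the opaque sub-meshes is the real work, and the blend function shown is the premultiplied front-to-back setup described in the question.

// Pass 1: the generated opaque parts of the sprites, front to back, writing depth.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
// ... draw opaque sub-meshes ...

// Pass 2: the full sprite quads, depth-tested against pass 1 but not writing depth.
// Fragments hidden behind opaque geometry can now be rejected by early-z.
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);   // premultiplied alpha, front-to-back
// ... draw sprite quads ...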

Not calling glClear results in weird artifacts

I want to NOT call glClear for the depth or color bit because I want to be able to see all the previously rendered frames. It does work, except it repeats the model all over the x and y axes and also causes some strange grey blocky lines. Is there a way to accomplish this? I'm using OpenGL ES 3 on Android. Thank you for any help.
The contents of the default framebuffer at the start of a frame are undefined, especially on tile-based renderers, which most of the mobile GPUs are. Your "repeats" in the X and Y axes are likely just showing how big the tiles are on your particular GPU (e.g. it's just dumping out whatever is in the GPU local tile RAM, repeated N times to completely cover the screen).
If you want to render on top of the previous frame you need to configure the EGL window surface to use EGL_BUFFER_PRESERVED (the default is EGL_BUFFER_DESTROYED), e.g.:
eglSurfaceAttrib(m_display, m_surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);
Note 1: this will incur some overhead (the surface is effectively copied back into tile-local memory), whereas starting with a surface discard or invalidate, or a clear is usually free.
Note 2: this will only preserve color data; there is no means to preserve depth or stencil across frames for the default framebuffer.

Summed area table in GLSL and GPU fragment shader execution

I am trying to compute the integral image (aka summed area table) of a texture I have in the GPU memory (a camera capture), the goal being to compute the adaptive threshold of said image. I'm using OpenGL ES 2.0, and still learning :).
I did a test with a simple gaussian blur shader (vertical/horizontal pass), which is working fine, but I need a way bigger variable average area for it to give satisfactory results.
I did implement a version of that algorithm on CPU before, but I'm a bit confused on how to implement that on a GPU.
I tried to do a (completely incorrect) test with just something like this for every fragment:
#version 100
#extension GL_OES_EGL_image_external : require
precision highp float;
uniform sampler2D u_Texture; // The input texture.
varying lowp vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
uniform vec2 u_PixelDelta; // Pixel delta
void main()
{
    // get neighboring pixels values
    float center = texture2D(u_Texture, v_TexCoordinate).r;
    float a = texture2D(u_Texture, v_TexCoordinate + vec2(u_PixelDelta.x * -1.0, 0.0)).r;
    float b = texture2D(u_Texture, v_TexCoordinate + vec2(0.0, u_PixelDelta.y * 1.0)).r;
    float c = texture2D(u_Texture, v_TexCoordinate + vec2(u_PixelDelta.x * -1.0, u_PixelDelta.y * 1.0)).r;

    // compute value
    float pixValue = center + a + b - c;

    // Result stores value (R) and original gray value (G)
    gl_FragColor = vec4(pixValue, center, center, 1.0);
}
And then another shader to get the area that I want and then get the average. This is obviously wrong as there are multiple execution units operating at the same time.
I know that the common way of computing a prefix sum on a GPU is to do it in two passes (vertical/horizontal, as discussed here on this thread or here), but isn't there a problem here, as there is a data dependency of each cell on the previous (top or left) one?
I can't seem to understand the order in which the multiple execution units on a GPU will process the different fragments, and how a two-pass filter can solve that issue. As an example, if I have some values like this:
2 1 5
0 3 2
4 4 7
The two pass should give (first columns then rows):
2 1 5 2 3 8
2 4 7 -> 2 6 13
6 8 14 6 14 28
How can I be sure that, as an example, the value [0;2] will be computed as 6 (2 + 4) and not 4 (0 + 4, if the 0 hasn't been computed yet) ?
Also, as I understand that fragments are not pixels (If I'm not mistaken), would the values I store back in one of my texture in the first pass be the same in another pass if I use the exact same coordinates passed from the vertex shader, or will they be interpolated in some way ?
Tommy and Bartvbl address your questions about a summed-area table, but your core problem of an adaptive threshold may not need that.
As part of my open source GPUImage framework, I've done some experimentation with optimizing blurs over large radii using OpenGL ES. Generally, increasing blur radii leads to a significant increase in texture sampling and calculations per pixel, with an accompanying slowdown.
However, I found that for most blur operations you can apply a surprisingly effective optimization to cap the number of blur samples. If you downsample the image before blurring, blur at a smaller pixel radius (radius / downsampling factor), and then linearly upsample, you can arrive at a blurred image that is the equivalent of one blurred at a much larger pixel radius. In my tests, these downsampled, blurred, and then upsampled images look almost identical to the ones blurred based on the original image resolution. In fact, precision limits can lead to larger-radii blurs done at a native resolution breaking down in image quality past a certain size, where the downsampled ones maintain the proper image quality.
By adjusting the downsampling factor to keep the downsampled blur radius constant, you can achieve near constant-time blurring speeds in the face of increasing blur radii. For an adaptive threshold, the image quality should be good enough to use for your comparisons.
I use this approach in the Gaussian and box blurs within the latest version of the above-linked framework, so if you're running on Mac, iOS, or Linux, you can evaluate the results by trying out one of the sample applications. I have an adaptive threshold operation based on a box blur that uses this optimization, so you can see if the results there are what you want.
As per the above, it's not going to be fantastic on a GPU. But assuming the cost of shunting data between the GPU and CPU is more troubling, it may still be worth persevering.
The most obvious prima facie solution is to split horizontal/vertical as discussed. Use an additive blending mode, create a quad that draws the whole source image, then e.g. for the horizontal step on a bitmap of width n issue a call that requests the quad be drawn n times, the 0th time at x = 0, the mth time at x = m. Then ping-pong via an FBO, switching the target buffer of the horizontal draw into the source texture for the vertical.
Memory accesses are probably O(n^2) (i.e. you'll probably cache quite well, but that's hardly a complete relief) so it's a fairly poor solution. You could improve it with divide and conquer by doing the same thing in bands — e.g. for the vertical step, independently sum individual rows of 8, after which the error in every row below the final is the failure to include whatever the sums are on that row. So perform a second pass to propagate those.
However an issue with accumulating in the frame buffer is clamping to avoid overflow — if you're expecting a value greater than 255 anywhere in the integral image then you're out of luck because the additive blending will clamp and GL_RG32I et al don't reach ES prior to 3.0.
The best solution I can think of to that, without using any vendor-specific extensions, is to split up the bits of your source image and combine channels after the fact. Supposing your source image were 4 bit and your image less than 256 pixels in both directions, you'd put one bit each in the R, G, B and A channels, perform the normal additive step, then run a quick recombine shader as value = A + (B*2) + (G*4) + (R*8). If your texture is larger or smaller in size or bit depth then scale up or down accordingly.
(platform specific observation: if you're on iOS then you've hopefully already got a CVOpenGLESTextureCache in the loop, which means you have CPU and GPU access to the same texture store, so you might well prefer to kick this step off to GCD. iOS is amongst the platforms supporting EXT_shader_framebuffer_fetch; if you have access to that then you can write any old blend function you like and at least ditch the combination step. Also you're guaranteed that preceding geometry has completed before you draw so if each strip writes its totals where it should and also to the line below then you can perform the ideal two-pixel-strips solution with no intermediate buffers or state changes)
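The recombine step mentioned a couple of paragraphs back could look roughly like this GLSL ES sketch. The uniform and varying names are mine, and it simply applies the A + 2B + 4G + 8R formula described above:

precision highp float;
uniform sampler2D u_Texture;      // the accumulated per-channel bit planes
varying vec2 v_TexCoordinate;
void main()
{
    vec4 bits = texture2D(u_Texture, v_TexCoordinate);
    // A holds bit 0, B bit 1, G bit 2 and R bit 3, as described above
    float value = bits.a + bits.b * 2.0 + bits.g * 4.0 + bits.r * 8.0;
    gl_FragColor = vec4(value, value, value, 1.0);
}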
What you are attempting to do cannot be done in a fragment shader. GPUs are by nature very different from CPUs: they execute their instructions in parallel, in massive numbers at the same time. Because of this, OpenGL does not make any guarantees about execution order, because the hardware physically doesn't allow it to.
So there is not really any defined order other than "whatever the GPU thread block scheduler decides".
Fragments are pixels, sorta-kinda. They are pixels that potentially end up on screen. If another triangle ends up in front, the previously calculated colour value is discarded. This happens regardless of whatever colour was stored at that pixel in the colour buffer previously.
As for creating the summed area table on the GPU, I think you may first want to look at GLSL "Compute Shaders", which are specifically made for this sort of thing.
I think you may be able to get this to work by creating a single thread for each row of pixels in the table, then have every thread "lag behind" by 1 pixel compared to the previous row.
In pseudocode:
int row_id = thread_id()
for column_index in (image.cols + image.rows):
    int my_current_column_id = column_index - row_id
    if my_current_column_id >= 0 and my_current_column_id < image.width:
        // calculate sums
The catch of this method is that all threads should be guaranteed to execute their instructions simultaneously without getting ahead of one another. This is guaranteed in CUDA, but I'm not sure whether it is in OpenGL compute shaders. It may be a starting point for you, though.
It may look surprising to a beginner, but the prefix sum or SAT calculation is well suited to parallelization. While the Hensley algorithm is the most intuitive to understand (and is also implemented in OpenGL), more work-efficient parallel methods are available; see CUDA scan. The paper from Sengupta discusses a parallel method with reduce and down-sweep phases, which seems to be the state-of-the-art efficient method. These are valuable materials, but they do not cover OpenGL shader implementations in detail. The closest document is the presentation you have found (it refers to the Hensley publication), since it has some shader snippets. This is a job which is doable entirely in a fragment shader with FBO ping-pong. Note that the FBO and its texture need to have a high-precision internal format - GL_RGB32F would be best, but I am not sure if it is supported in OpenGL ES 2.0.
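For illustration, one horizontal pass of that Hensley-style (recursive doubling) scan might look roughly like the fragment shader below; you would run it ceil(log2(width)) times with FBO ping-pong, doubling u_OffsetX each pass, then repeat the idea vertically. All names here are made up, and the precision/format caveats above still apply:

precision highp float;
uniform sampler2D u_Texture;      // output of the previous pass
uniform float u_OffsetX;          // 2^pass texels, expressed in texture coordinates
varying vec2 v_TexCoordinate;
void main()
{
    float sum = texture2D(u_Texture, v_TexCoordinate).r;
    float neighbourX = v_TexCoordinate.x - u_OffsetX;
    if (neighbourX >= 0.0) {
        // add the partial sum that ends 2^pass texels to the left
        sum += texture2D(u_Texture, vec2(neighbourX, v_TexCoordinate.y)).r;
    }
    gl_FragColor = vec4(sum, 0.0, 0.0, 1.0);
}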

Scrolling parallax background, infinitely repeated in libgdx

I'm making a 2D sidescrolling space shooter-type game, where I need a background that can be scrolled infinitely (it is tiled or wrapped repeatedly). I'd also like to implement parallax scrolling, so perhaps have one lowest background nebula texture that barely moves, a higher one containing far-away stars that barely moves, and the highest background containing close stars that moves a lot.
I see from Google that I'd have each layer move 50% less than the layer above it, but how do I implement this in libgdx? I have a Camera that can be zoomed in and out, and on the physical 800x480 screen it could show anything from 128x128 pixels (a ship) to a huge area of space featuring the textures wrapped multiple times at their edges.
How do I continuously wrap a smaller texture (say 512x512) as if it were infinitely tiled (for when the camera is zoomed right out), and then how do I layer multiple textures like these, keep them together in a suitable structure (is there one in the libgdx API?) and move them as the player's coords change? I've looked at the javadocs and the examples but can't find anything like this problem, apologies if it's obvious!
Hey, I am also making a parallax background and trying to get it to scroll.
There is a ParallaxTest.java in the repository; it can be found here.
This file is a standalone class, so you will need to incorporate it into your game however you want, and you will need to change the control input since it's hooked up to use the touch screen/mouse.
This worked for me. As for the repeated background, I haven't gotten that far yet, but I think you just need some basic logic like: one screen away from the end, change the first few screens' positions to line up at the end.
I don't have much more to say regarding parallax scrolling than PFG already did. There is indeed an example in the repository under the test folder and several explanations around the web. I liked this one.
The matter of the background is really easy to solve. This and other related problems can be approached by using modular arithmetic. I won't go into the details because once it is shown, it is very easy to understand.
Imagine that you want to show a compass in your screen. You have a texture 1024x16 representing the cardinal points. Basically all you have is a strip. Letting aside the considerations about the real orientation and such, you have to render it.
Your viewport is 300x400 for example, and you want 200px of the texture on screen (to make it more interesting). You can render it perfectly with a single region until you reach the position (1024-200) = 824. Once you're in this position clearly there is no more texture. But since it is a compass, it's obvious that once you reach the end of it, it has to start again. So this is the answer. Another texture region will do the trick. The range 825-1023 has to be represented by another region. The second region will have a size of (1024-pos) for every value pos>824 && pos<1024
This code is intended to work as a real example of a compass. It's very dirty since it works with relative positions all the time due to the conversion between the range (0-3.6) and (0-1024).
spriteBatch.begin();
if (compassorientation < 0)
    compassorientation = (float) (3.6 - compassorientation % 3.6);
else
    compassorientation = (float) (compassorientation % 3.6);
if (compassorientation < ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 200, 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth(), 32 * (float) 1.2);
} else if (compassorientation > ((float) (1024 - 200) / 1024 * 3.6)) {
    compass1.setRegion((int) (compassorientation / 3.6 * 1024), 0, 1024 - (int) (compassorientation / 3.6 * 1024), 16);
    spriteBatch.draw(compass1, 0, (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), 32 * (float) 1.2);
    compass2.setRegion(0, 0, 200 - compass1.getRegionWidth(), 16);
    spriteBatch.draw(compass2, compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth(), (Gdx.graphics.getHeight() / 2) - (-250 + compass1.getTexture().getHeight() * (float) 1.2), Gdx.graphics.getWidth() - (compass1.getRegionWidth() / 200f * Gdx.graphics.getWidth()), 32 * (float) 1.2);
}
spriteBatch.end();
You can use the setWrap function like below:
Texture texture = new Texture(Gdx.files.internal("images/background.png"));
texture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
It will draw the background repeatedly! Hope this helps!
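To actually see the repetition you draw with UV coordinates that go past 1, which is when the Repeat wrap mode kicks in. A rough sketch (scrollX and layerSpeed are made-up variables, and you may need to swap v/v2 depending on how your image is oriented):

// With TextureWrap.Repeat set, u/u2 outside [0, 1] tile the texture horizontally.
float u = scrollX * layerSpeed / texture.getWidth();
batch.draw(texture,
        0, 0,                                                // position on screen
        Gdx.graphics.getWidth(), Gdx.graphics.getHeight(),   // size on screen
        u, 0f, u + 1f, 1f);                                  // one full repeat, shifted by u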
Beneath where you initialize your Texture for the object, add this:
YourTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);
where YourTexture is the texture that you want to parallax-scroll.
In your render method, add this code:
batch.draw(YourTexture, 0, 0, 0, srcy, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
srcy += 10;
It is going to give you an error, so declare a variable called srcy. It is nothing too fancy:
int srcy;

How WebGL works?

I'm looking for a deep understanding of how WebGL works. I want to gain knowledge at a level that most people care less about, because the knowledge isn't necessarily useful to the average WebGL programmer. For instance, what role does each part (browser, graphics driver, etc.) of the total rendering system play in getting an image on the screen?
Does each browser have to create a JavaScript/HTML engine/environment in order to run WebGL in the browser? Why is Chrome ahead of everyone else in terms of WebGL compatibility?
So, what are some good resources to get started? The Khronos specification is kind of lacking (from what I saw browsing it for a few minutes) for what I'm after. I mostly want to know how this is accomplished/implemented in browsers and what else needs to change on your system to make it possible.
Hopefully this little write-up is helpful to you. It overviews a big chunk of what I've learned about WebGL and 3D in general. BTW, if I've gotten anything wrong, somebody please correct me -- because I'm still learning, too!
Architecture
The browser is just that, a Web browser. All it does is expose the WebGL API (via JavaScript), which the programmer does everything else with.
As near as I can tell, the WebGL API is essentially just a set of (browser-supplied) JavaScript functions which wrap around the OpenGL ES specification. So if you know OpenGL ES, you can adopt WebGL pretty quickly. Don't confuse this with pure OpenGL, though. The "ES" is important.
The WebGL spec was intentionally left very low-level, leaving a lot to
be re-implemented from one application to the next. It is up to the
community to write frameworks for automation, and up to the developer
to choose which framework to use (if any). It's not entirely difficult
to roll your own, but it does mean a lot of overhead spent on
reinventing the wheel. (FWIW, I've been working on my own WebGL
framework called Jax for a while
now.)
The graphics driver supplies the implementation of OpenGL ES that actually runs your code. At this point, it's running on the machine hardware, below even the C code. While this is what makes WebGL possible in the first place, it's also a double edged sword because bugs in the OpenGL ES driver (which I've noted quite a number of already) will show up in your Web application, and you won't necessarily know it unless you can count on your user base to file coherent bug reports including OS, video hardware and driver versions. Here's what the debug process for such issues ends up looking like.
On Windows, there's an extra layer which exists between the WebGL API and the hardware: ANGLE, or "Almost Native Graphics Layer Engine". Because the OpenGL ES drivers on Windows generally suck, ANGLE receives those calls and translates them into DirectX 9 calls instead.
Drawing in 3D
Now that you know how the pieces fit together, let's look at a lower-level explanation of how everything combines to produce a 3D image.
JavaScript
First, the JavaScript code gets a 3D context from an HTML5 canvas element. Then it registers a set of shaders, which are written in GLSL ([Open] GL Shading Language) and essentially resemble C code.
The rest of the process is very modular. You need to get vertex data and any other information you intend to use (such as vertex colors, texture coordinates, and so forth) down to the graphics pipeline using uniforms and attributes which are defined in the shader, but the exact layout and naming of this information is very much up to the developer.
JavaScript sets up the initial data structures and sends them to the WebGL API, which sends them to either ANGLE or OpenGL ES, which ultimately sends it off to the graphics hardware.
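As a toy illustration of that hand-off (purely a sketch: the attribute name is arbitrary, 'program' is assumed to be an already compiled and linked shader program, and error handling is omitted):

// Get a WebGL context from a canvas and push some vertex data through the API.
const gl = document.querySelector("canvas").getContext("webgl");

const positions = new Float32Array([
   0.0,  0.5,
  -0.5, -0.5,
   0.5, -0.5,
]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// Wire the buffer up to an attribute declared in the vertex shader, then draw.
const loc = gl.getAttribLocation(program, "a_position");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);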
Vertex Shaders
Once the information is available to the shader, the shader must transform the information in 2 phases to produce 3D objects. The first phase is the vertex shader, which sets up the mesh coordinates. (This stage runs entirely on the video card, below all of the APIs discussed above.) Most usually, the process performed on the vertex shader looks something like this:
gl_Position = PROJECTION_MATRIX * VIEW_MATRIX * MODEL_MATRIX * VERTEX_POSITION
where VERTEX_POSITION is a 4D vector (x, y, z, and w, which is usually set to 1); VIEW_MATRIX is a 4x4 matrix representing the camera's view into the world; MODEL_MATRIX is a 4x4 matrix which transforms object-space coordinates (that is, coords local to the object before rotation or translation have been applied) into world-space coordinates; and PROJECTION_MATRIX represents the camera's lens.
Most often, the VIEW_MATRIX and MODEL_MATRIX are precomputed and
called MODELVIEW_MATRIX. Occasionally, all 3 are precomputed into
MODELVIEW_PROJECTION_MATRIX or just MVP. These are generally meant
as optimizations, though I'd like to find time to do some benchmarks. It's
possible that precomputing is actually slower in JavaScript if it's
done every frame, because JavaScript itself isn't all that fast. In
this case, the hardware acceleration afforded by doing the math on the
GPU might well be faster than doing it on the CPU in JavaScript. We can
of course hope that future JS implementations will resolve this potential
gotcha by simply being faster.
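In GLSL, that vertex-shader stage might look like this minimal sketch (the uniform and attribute names are arbitrary, not anything WebGL mandates):

uniform mat4 u_Projection;
uniform mat4 u_View;
uniform mat4 u_Model;
attribute vec4 a_Position;   // object-space vertex, with w = 1.0
void main()
{
    // object space -> world space -> eye space -> clip space
    gl_Position = u_Projection * u_View * u_Model * a_Position;
}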
Clip Coordinates
When all of these have been applied, the gl_Position variable will have a set of XYZ coordinates ranging within [-1, 1], and a W component. These are called clip coordinates.
It's worth noting that clip coordinates are the only thing the vertex shader really
needs to produce. You can completely skip the matrix transformations
performed above, as long as you produce a clip coordinate result. (I have even
experimented with swapping out matrices for quaternions; it worked
just fine but I scrapped the project because I didn't get the
performance improvements I'd hoped for.)
After you supply clip coordinates to gl_Position, WebGL divides the result by gl_Position.w, producing what's called normalized device coordinates.
From there, projecting a pixel onto the screen is a simple matter of multiplying by 1/2 the screen dimensions and then adding 1/2 the screen dimensions.[1] Here are some examples of clip coordinates translated into 2D coordinates on an 800x600 display:
clip = [0, 0]
x = (0 * 800/2) + 800/2 = 400
y = (0 * 600/2) + 600/2 = 300
clip = [0.5, 0.5]
x = (0.5 * 800/2) + 800/2 = 200 + 400 = 600
y = (0.5 * 600/2) + 600/2 = 150 + 300 = 450
clip = [-0.5, -0.25]
x = (-0.5 * 800/2) + 800/2 = -200 + 400 = 200
y = (-0.25 * 600/2) + 600/2 = -150 + 300 = 150
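The same conversion written as a small helper function (just a sketch of the arithmetic above; real code should use the gl.viewport settings, as footnote [1] below points out):

function ndcToScreen(ndcX, ndcY, width, height) {
  // scale from [-1, 1] to [0, width] and [0, height]
  const x = ndcX * width / 2 + width / 2;
  const y = ndcY * height / 2 + height / 2;
  return [x, y];
}

ndcToScreen(0.5, 0.5, 800, 600);   // -> [600, 450], matching the example above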
Pixel Shaders
Once it's been determined where a pixel should be drawn, the pixel is handed off to the pixel shader, which chooses the actual color the pixel will be. This can be done in a myriad of ways, ranging from simply hard-coding a specific color to texture lookups to more advanced normal and parallax mapping (which are essentially ways of "cheating" texture lookups to produce different effects).
Depth and the Depth Buffer
Now, so far we've ignored the Z component of the clip coordinates. Here's how that works out. When we multiplied by the projection matrix, the third clip component resulted in some number. If that number is greater than 1.0 or less than -1.0, then the number is beyond the view range of the projection matrix, corresponding to the matrix zFar and zNear values, respectively.
So if it's not in the range [-1, 1] then it's clipped entirely. If it is in that range, then the Z value is scaled to 0 to 1[2] and is compared to the depth buffer[3]. The depth buffer has the same dimensions as the screen, so if a projection of 800x600 is used, the depth buffer is 800 pixels wide and 600 pixels high. We already have the pixel's X and Y coordinates, so they are plugged into the depth buffer to get the currently stored Z value. If the stored Z value is greater than the new Z value, then the new Z value is closer than whatever was previously drawn, and it replaces the stored value[4]. At this point it's safe to light up the pixel in question (or in the case of WebGL, draw the pixel to the canvas), and store the Z value as the new depth value.
If the new Z value is greater than the stored depth value, then it is deemed to be "behind" whatever has already been drawn, and the pixel is discarded.
[1]The actual conversion uses the gl.viewport settings to convert from normalized device coordinates to pixels.
[2]It's actually scaled to the gl.depthRange settings. They default to 0 and 1.
[3]Assuming you have a depth buffer and you've turned on depth testing with gl.enable(gl.DEPTH_TEST).
[4]You can set how Z values are compared with gl.depthFunc
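A minimal example of those settings on a WebGL context (assuming gl is your rendering context; gl.LESS and a 0-1 depth range are the defaults anyway):

gl.enable(gl.DEPTH_TEST);    // [3] turn depth testing on
gl.depthFunc(gl.LESS);       // [4] pass when the new Z is smaller (closer)
gl.depthRange(0.0, 1.0);     // [2] mapping of NDC Z into stored depth values
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);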
I would read these articles
http://webglfundamentals.org/webgl/lessons/webgl-how-it-works.html
Assuming those articles are helpful, the rest of the picture is that WebGL runs in a browser. It renders to a canvas tag. You can think of a canvas tag like an img tag, except you use the WebGL API to generate an image instead of downloading one.
Like other HTML5 tags, the canvas tag can be styled with CSS and sit under or over other parts of the page. It is composited (blended) with other parts of the page, and it can be transformed, rotated, and scaled by CSS along with other parts of the page. That's a big difference from OpenGL or OpenGL ES.

Resources