How are SVGs able to incorporate effects like blur?

I have been familiar with the SVG format for a long time, as well as its usability and benefits over a raster image.
But recently I ran into a situation where I needed a blur effect in SVG (basically an asset defined by primitive shapes that mimics a blur effect and is infinitely scalable), so I did a Google search and, much to my surprise, there are official ways of doing it; I was expecting there wouldn't be!
What intrigues me is this: if SVGs are really made up of primitive shapes defined mathematically, then how can they incorporate an effect like blur? What shape could even be used for such a process?

In Firefox we render the SVG to an offscreen surface, blur the pixels on that surface and then blit the offscreen surface. I imagine other browsers work similarly.
Filters and masks are raster operations; almost everything else is vector.
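To make the "blur the pixels on that surface" step concrete, here is a minimal, hypothetical sketch (plain Java, not actual browser code) of a single-axis box blur over an offscreen buffer of grayscale pixels. A real SVG blur filter applies a separable Gaussian to RGBA pixels, but the idea is the same: the shapes are rasterized first, then the blur runs on pixels, not on geometry.

```java
// Minimal sketch: horizontal box blur over an offscreen grayscale buffer.
// Illustrative only -- browsers apply a separable Gaussian to RGBA surfaces.
public final class BoxBlur {
    static int[] blurHorizontal(int[] src, int width, int height, int radius) {
        int[] dst = new int[src.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int sum = 0, count = 0;
                // Average the pixels in a window of +/- radius around x.
                for (int dx = -radius; dx <= radius; dx++) {
                    int sx = x + dx;
                    if (sx >= 0 && sx < width) {
                        sum += src[y * width + sx];
                        count++;
                    }
                }
                dst[y * width + x] = sum / count;
            }
        }
        return dst;
    }
}
```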

Related

Direct2D: Check if image is outside visible area before drawing?

Is it a reasonable optimization to omit calls to ID2D1HwndRenderTarget::DrawBitmap() if the image will end up outside the visible area? If I implement the checking logic in the application, that will cost some performance, so if the first thing D2D does is the same check, I'd rather not do it.
I ran a test with my application, which renders some UI parts using Direct2D (attaching RenderDoc), and the behaviour seems a bit random.
I render a mix of rectangles, text, path geometries (Béziers), and a rectangle with a bitmap brush (which should be equivalent to your DrawBitmap call).
Then I capture one frame with all those objects visible, and another with my UI panned (using a transform) so the objects are not visible.
From there I could check what is drawn and what is not:
Text is always culled
Solid color rectangles are not culled
Most of the time path geometries are culled, but sometimes not.
Rectangles with a bitmap brush are NEVER culled
So it seems Direct2D is making different decisions depending on the types of elements you plan to draw.
Since rectangles are easily batched and cheap to draw, it seems that they are just drawn regardless.
Bitmap rectangles and text require more work, so it seems they are effectively culled.
Path geometry culling seemed to depend on how many polygons the geometry is tessellated into (I had a path that translated to 26 primitives and was not culled, and another that translated to 120 and was culled).
So you can trust Direct2D to perform that optimization, but I would personally implement a quick rectangle-to-rectangle check just in case (it's not going to hurt your performance, as it's an extremely simple operation).
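As a minimal sketch of that rectangle-to-rectangle check (written here in Java for brevity; the same axis-aligned overlap test is a couple of comparisons in any language), assuming you know the destination rectangle of each draw call and the bounds of the visible area:

```java
// Axis-aligned rectangle overlap test: skip the draw call when this returns false.
// Assumes screen-style coordinates where top < bottom.
final class CullCheck {
    static boolean intersects(float left1, float top1, float right1, float bottom1,
                              float left2, float top2, float right2, float bottom2) {
        // Rectangles overlap unless one lies entirely to one side of the other.
        return left1 < right2 && right1 > left2
            && top1 < bottom2 && bottom1 > top2;
    }
}
```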

Unity, fresnel shader on raw image

Hello, I'm trying to achieve the effect in the image below (it is like a shine of light, but only on top of the raw image).
Unfortunately I cannot figure out how to do it. I tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element, and it renders a render texture that is being captured by a camera.
I'm totally lost here; any kind of help will be appreciated. How can I make that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
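For reference, a common rim/Fresnel-style term looks like the sketch below (the usual approximation, not any specific Unity shader). On a UI quad the normal and view vector are essentially parallel everywhere, so the term vanishes:

```latex
F = \bigl(1 - \max(0,\; \mathbf{N}\cdot\mathbf{V})\bigr)^{p}
\qquad
\text{UI plane: } \mathbf{N}\cdot\mathbf{V} \approx 1 \;\Rightarrow\; F \approx 0
```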
Solving this with shaders can be done in a couple of ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example), or use some similar method which maps the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some Fresnel shaders could work with that too. It does, however, require you to make a model of the shape and bake the normals from it.
A signed distance field, on the other hand, can be generated with a script from an image, so if you have a lot of images, it might be the fastest approach. Getting the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating an inner glow black/white texture in Photoshop and overlaying it using an additive shader would work better for you. If you don't know how to write shaders, then maybe the two above approaches are a bit of a tall order.

Unity 4.6 Canvas - How to correctly apply 2D physics effects

Is there a way to universally multiply physics2D calculations on the canvas?
I'm trying to make a set of canvas UI elements with 2D physics properties. The objects contain images and text, but need to respond to gravity, impacts, and overlapping collision boxes with other GUI elements.
I've added Rigidbody2D and BoxCollider2D components to my objects. However, they move very slowly. If given gravity, they fall slowly. If overlapped, they push each other apart slowly.
I've figured out that this is due to the canvas having a very large scale. My objects are effectively 'very big and very far away'.
I can't modify the canvas scale. It needs to be huge or I get render artifacts.
I can't just modify gravity because it doesn't provide a universal fix. Things fall faster, but they don't push apart or spring right.
I can't modify the timestep because it affects the whole world, not just the canvas.
My canvas objects have widths on the order of 80, where Unity physics expects widths on the order of 1. How can I get them to behave as if they had a width of 1?
Is there some universal scaling factor for canvas-based physics, or am I simply misusing the canvas for something it is not intended for?
If you are still having this problem, the answer to fixing the movement rates of your scaled-up objects is to scale up your movement forces as well as your gravity. If you can't get certain elements to work right, use a force push that you can set to any strength.
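As a rough worked example (assuming the roughly 80:1 size ratio mentioned in the question), constant-acceleration motion shows why accelerations and forces need to scale with object size for the motion to look the same:

```latex
x(t) = \tfrac{1}{2}\,a\,t^{2}
\;\Rightarrow\;
x' = s\,x \text{ over the same time requires } a' = s\,a .
\qquad
s = 80:\quad g' \approx 80 \times 9.81 \approx 785\ \text{units/s}^2,
\qquad
F' = m\,a' = 80\,F \ (\text{mass unchanged}).
```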

Drawing with transparency in opengl es 2

I'm working on a simple drawing application. My line is constructed from polygons and things look good so far. I would like to add a transparency feature, and I used glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); for that purpose. However, because my polygons sometimes overlap, I get the ugly result shown in the picture (multiple layers of transparency). What I would like to get is the figure on the left (no overlap artifacts, because there is no transparency), but with an overall transparency applied.
I guess I could do this by keeping the polygons from overlapping, but that would be overkill for this task, I think. There should be a way to control them at drawing time, but being a beginner with OpenGL doesn't help.
I'm sorry, but the way transparency works does not allow you to do what you want without manually keeping the polygons from overlapping. Transparency takes the colour of the surface below and uses the blending function you specify to calculate the final colour of the pixel.
In your program you are drawing multiple polygons with alpha on top of each other. That means the blend is applied once per overlapping polygon, so the overlap regions come out more opaque, giving the result you see.
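To see why, apply the GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA blend once and then a second time with the same source colour C_s and alpha α over a destination C_d:

```latex
C_1 = \alpha\,C_s + (1-\alpha)\,C_d
\qquad
C_2 = \alpha\,C_s + (1-\alpha)\,C_1 = \alpha(2-\alpha)\,C_s + (1-\alpha)^2\,C_d
```

The effective coverage grows from α to α(2−α) wherever two polygons overlap, which is exactly the darker/more-opaque seam you are seeing.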
I've never actually written a drawing application, but you could perhaps take a look at triangle strips to draw your lines. They allow you to extend the line point by point, and make sure the geometry won't overlap with itself.

LibGDX - Sprite to Pixmap

I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or their pixels rather) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running, and, only at specific times, merge some of them onto a single background sprite.
I believe that most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't quite so obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while it is all on the same screen isn't really applicable either, as it obviously shouldn't take the background colors with it.
If there is a suitable alternative to this that would accomplish what I am looking for I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (called an "FBO" or FrameBuffer in libGDX), blending them as they're added, and then render that offscreen buffer to the screen as a single draw call. If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
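A minimal sketch of that approach (assuming libGDX's FrameBuffer and the older ScreenUtils.getFrameBufferPixmap helper; newer libGDX versions expose Pixmap.createFromFrameBuffer instead, so check your version). It also assumes the batch's projection matrix already matches the FBO size:

```java
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.utils.ScreenUtils;

// Sketch: draw a background plus some sprites into an offscreen FrameBuffer,
// then read the composited result back into a Pixmap. Assumes a GLES 2.0 context.
public final class SpriteMerger {
    public static Pixmap merge(SpriteBatch batch, Sprite background, Sprite[] sprites,
                               int width, int height) {
        FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, width, height, false);
        fbo.begin();
        batch.begin();
        background.draw(batch);      // background first
        for (Sprite s : sprites) {
            s.draw(batch);           // alpha-blended on top
        }
        batch.end();
        // Read the pixels back while the FBO is still bound.
        // Note: the read-back image is y-flipped relative to screen space.
        Pixmap result = ScreenUtils.getFrameBufferPixmap(0, 0, width, height);
        fbo.end();
        fbo.dispose();
        return result;
    }
}
```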
