Looking for a blur algorithm to emulate a camera positioned further away - photography

I am researching ways to blur images, and for our purposes it seems that simple Gaussian blur algorithms do not cut it.
We are looking for a way to take a high-resolution image and process it into an image that matches a photograph of the same object taken from further away. I imagine some form of perspective transform needs to be known, in addition to a blur.
Does anyone know of resources or algorithms geared toward processing an image to look like a true optical blur rather than a Gaussian one?
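For concreteness, the usual starting point: moving the camera back mostly shrinks the subject, and real defocus comes from a roughly disc-shaped aperture, so a downscale plus a disc-kernel convolution gets much closer to an optical look than a Gaussian. A minimal sketch, assuming OpenCV; `distance_factor` and `aperture_radius` are made-up illustration parameters, not anything from the question:

```python
import cv2
import numpy as np

def emulate_distance(img, distance_factor=2.0, aperture_radius=4):
    """Approximate shooting the same subject from `distance_factor` times
    further away: the subject shrinks, and the out-of-focus blur looks
    like a disc (bokeh), not a Gaussian."""
    # 1. Shrink: a subject twice as far away covers half as many pixels.
    h, w = img.shape[:2]
    small = cv2.resize(img,
                       (int(w / distance_factor), int(h / distance_factor)),
                       interpolation=cv2.INTER_AREA)

    # 2. Defocus: convolve with a normalized disc kernel, which models a
    #    circular aperture better than a Gaussian does.
    d = 2 * aperture_radius + 1
    kernel = np.zeros((d, d), np.float32)
    cv2.circle(kernel, (aperture_radius, aperture_radius),
               aperture_radius, 1.0, -1)  # filled disc
    kernel /= kernel.sum()
    return cv2.filter2D(small, -1, kernel)
```

If the camera also moves off-axis, you would additionally need a homography (e.g. cv2.warpPerspective with a known perspective transform), which this sketch leaves out.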

Related

How are SVGs able to incorporate effects like blur?

I have been familiar with the SVG format for a long time, and with its usability and benefits over raster images as well.
But recently I came across a situation where I needed a blur effect in SVG (basically an asset defined by primitive shapes that mimics a blur effect and is infinitely scalable), so I did a Google search, and much to my surprise there are official ways of doing it; I was expecting there not to be!
I am basically intrigued by this: if SVGs are really made up of primitive shapes defined mathematically, then how can they incorporate an effect like blur? What shape could even be used for such a process?
In Firefox we render the SVG to an offscreen surface, blur the pixels on that surface and then blit the offscreen surface. I imagine other browsers work similarly.
Filters and masks are raster operations, most everything else is vector.
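To make that pipeline concrete, here is a toy sketch of the same rasterize-blur-blit idea. It uses Pillow purely for illustration and is not anything Firefox actually runs; the canvas size and shape are made up:

```python
from PIL import Image, ImageDraw, ImageFilter

# "Vector" stage: rasterize a primitive shape to an offscreen surface.
offscreen = Image.new("RGBA", (200, 200), (0, 0, 0, 0))
draw = ImageDraw.Draw(offscreen)
draw.ellipse((50, 50, 150, 150), fill=(200, 40, 40, 255))

# "Filter" stage: blur operates on the pixels, not on the shape's geometry.
blurred = offscreen.filter(ImageFilter.GaussianBlur(radius=6))

# "Blit" stage: composite the blurred surface onto the final canvas.
canvas = Image.new("RGBA", (200, 200), (255, 255, 255, 255))
canvas.alpha_composite(blurred)
```

So there is no "blur shape" at all; the scalability comes from re-running this rasterize-then-filter pipeline from the vector description at whatever resolution is needed.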

Is there any way to implement this beautiful image effect?

Recently I found an amazing app called Photo Lab, and I'm curious about one effect called Paper Rose. In the pictures below, one is the original picture and the other is the processed picture. My question is: what kind of algorithm can produce this effect? It would be even better if you could show me some code or a demo. Thanks in advance!
[Images: the original photo and the Paper Rose result]
I am afraid that this is not just an algorithm, but a complex piece of software.
The most difficult part is to model the shape of the rose. The petals are probably a meshed surface. It is not so difficult to give them a curved shape, but the hard issue is to group them in such a way that they do not intersect.
It is not impossible that this could be achieved by first laying the petals out in a flat geometry where you can control intersections, then wrapping them around an axis with a kind of polar transform. But I don't really believe that; I rather think they have a collision-avoiding geometric modeller.
The next steps, which are more classical, are to texture-map the pictures onto the petals and to perform the realistic rendering of the whole scene.
But there's another option, which I'll call the "poor man's rendering".
You can start from a real picture of a paper rose, where the petals have an empty, thick black frame. Then on the picture, you detect (either in some automated way or just by hand) points that correspond to a regular grid on the flattened paper.
As the petals are not wholly visible, the hidden parts must be clipped out from the mesh, possibly by using a polygonal fence.
Now you can take any picture, fit it over the undistorted mesh, clip out the hidden areas and warp to the distorted position. Then by compositing tricks, you will give it a natural shaded appearance on the rose.
Note: the process is made easier by drawing a complete grid inside the frame. Anyway, you will need to somehow erase it before doing the compositing, in order to retrieve just the shading information.
I would tend to believe that the second approach was used here, as I see a few mapping anomalies along some edges, which would not arise on a fully synthetic scene.
In any case, hard work.
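For the "poor man's rendering", the warp step could look something like the following sketch, assuming scikit-image; `drape_onto_mesh` and the toy grid are made up for illustration, and in practice the correspondences would be the grid points you detected on the photograph:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def drape_onto_mesh(picture, flat_pts, photo_pts, photo_shape):
    """Warp `picture` so that the grid points `flat_pts` (regular grid on
    the flattened paper) land on `photo_pts` (the same points located on
    the rose photograph). Both are (N, 2) arrays of (x, y) coordinates."""
    tform = PiecewiseAffineTransform()
    # warp() wants a map from output coordinates back to input coordinates,
    # so estimate the photo -> flat direction.
    tform.estimate(photo_pts, flat_pts)
    return warp(picture, tform, output_shape=photo_shape)

# Toy demo: a 4x4 grid, with the "photo" points displaced sinusoidally.
xs, ys = np.meshgrid(np.linspace(0, 99, 4), np.linspace(0, 99, 4))
flat = np.column_stack([xs.ravel(), ys.ravel()])
photo = flat + np.column_stack(
    [5 * np.sin(flat[:, 1] / 20), np.zeros(len(flat))])
out = drape_onto_mesh(np.random.rand(100, 100), flat, photo, (100, 100))
```

The clipping of hidden petal areas would then just be a mask applied to the warped result before the shading/compositing pass.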

Scaling drawing done in drawRect when view resizes

I'm still learning some of the ins and outs of custom view drawing in Cocoa.
I have a custom view where I draw lines and points based on the corresponding points in a larger, fixed-size rect elsewhere.
I would like my drawing to scale up or down when the view is resized, but maintain the same aspect ratio as the larger rect.
What is the best way to scale the drawing?
Do I need to somehow apply an affine transform?
Or should I be drawing to an imageRef?
I'm not really sure exactly how to do either one in this case, or how to keep it in sync with the size of the view and the aspect ratio of the larger rect the coordinates come from.
Any tips or links to example code are greatly appreciated.
Concatenating an affine transform sounds like the right solution. Scaling by the same factor in both dimensions will preserve the aspect ratio of your drawing, and you can use simple division to compute the right factor (assuming you aren't just getting it from a slider or something).
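To make the arithmetic concrete, here is the aspect-fit computation sketched in Python (the function name is made up for illustration); in your drawRect: you would feed the resulting scale and offsets into an NSAffineTransform via translateXBy:yBy: and scaleBy: before drawing, recomputing them whenever the view's bounds change:

```python
def aspect_fit_scale(view_w, view_h, rect_w, rect_h):
    """Uniform scale factor that fits a rect_w x rect_h coordinate space
    inside a view_w x view_h view without distorting the aspect ratio."""
    scale = min(view_w / rect_w, view_h / rect_h)
    # Offsets that center the scaled drawing inside the view.
    dx = (view_w - rect_w * scale) / 2
    dy = (view_h - rect_h * scale) / 2
    return scale, dx, dy

# e.g. a 400x300 source rect inside a 200x200 view: the limiting axis is
# width, so the drawing occupies 200x150 and is centered vertically.
print(aspect_fit_scale(200, 200, 400, 300))  # (0.5, 0.0, 25.0)
```

Using the same factor on both axes is what preserves the aspect ratio; the min() just picks whichever axis runs out of room first.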
If you haven't already, I highly recommend reading the Cocoa Drawing Guide and Quartz 2D Programming Guide. There's a lot of overlap, but the explanations are not copy-and-pasted, so if one guide's explanation of something doesn't make sense, look it up in the other one and try reading that version.

Is there a way to pre-render a virtual panoramic scene?

I would like to put a photorealistic virtual scene on a tablet so that when the user rotates the tablet, it appears as if the tablet is a window into a virtual world.
Pre-rendered scenes can be photorealistic, while real-time rendering tends to have a "computer-made" look. Given that for one scene the POV can be rotated but not translated in space, is it possible for a pre-rendered virtual panoramic scene to give an immersive impression?
I doubt that this is easy, since rotating the viewpoint will cause some sort of distortion. This kind of distortion is easy to handle for apps like Starwalk, but difficult for photos. Can anyone point me in a direction?
I know this would be tremendously easy if motion were restricted to one direction only, but I would like the user to have a full 3D experience.
You need to either warp the photographs before applying them as textures to your "sky dome" or use non-uniform texture coordinates. If done right, this will even out most of the distortion, giving a more realistic appearance.
Another alternative is to use more photographs so that you are only actually using the central area of each one.
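As a sketch of what "non-uniform texture coordinates" means in practice: with an equirectangular panorama, each dome vertex gets its (u, v) from its viewing angle rather than from a uniform stretch. This assumes numpy and a GL-style right-handed frame where the camera looks down -z; both conventions are assumptions, not something the question fixes:

```python
import numpy as np

def equirect_uv(directions):
    """Map unit view directions (N, 3) on the sky dome to (u, v) texture
    coordinates in an equirectangular panorama, so each vertex samples the
    photo at the correct angle instead of stretching it uniformly."""
    x, y, z = directions[:, 0], directions[:, 1], directions[:, 2]
    u = (np.arctan2(x, -z) / (2 * np.pi)) + 0.5        # yaw -> horizontal
    v = 0.5 - (np.arcsin(np.clip(y, -1, 1)) / np.pi)   # pitch -> vertical
    return np.column_stack([u, v])

# e.g. looking straight ahead (-z) samples the panorama's center:
print(equirect_uv(np.array([[0.0, 0.0, -1.0]])))  # [[0.5 0.5]]
```

Because the mapping is per-angle rather than per-pixel-of-photo, the texture density naturally varies across the dome, which is exactly the "evening out" of distortion described above.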
I've found that http://code.google.com/p/panoramagl/ can render cubic, spherical and cylindrical panoramic images, so the problem reduces to how to render a panorama, which can be solved by stitching. I will still leave this open to see if anyone else has better answers.

How do you mask an arbitrary area of an image to overlay another image?

I want to mask an arbitrary convex polygon area of an image and put another image into that area. I found this posting, but it wasn't clear to me whether it applies only to rectangular areas or to arbitrary polygons as well.
The basic flow I am talking about is to have an (x, y) coordinate on the screen which would serve as the center of my polygon (center in the sense of an arbitrary point that is consistent for me). I would like to mask this area so that the new image (polygonal in nature) is displayed there, while leaving the rest of the screen as is.
Can I do this easily and quickly?
You have to use the stencil buffer. It's basically another type of buffer that has a plethora of awesome applications, and one of the simplest is masking. While I can't recommend an OpenGL ES-specific tutorial off the top of my head, I highly recommend reading general tutorials, since it's not that different and it surely is fascinating.
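A minimal sketch of the two-pass stencil pattern, written with PyOpenGL against the fixed-function vertex-array API (which mirrors OpenGL ES 1.x). It assumes a current context that was created with a stencil buffer, and `draw_image_quad` is a hypothetical helper that draws your textured quad:

```python
import numpy as np
from OpenGL.GL import *

def draw_masked(polygon_vertices, draw_image_quad):
    """Draw an image so that only the pixels inside `polygon_vertices`
    (a flat [x0, y0, x1, y1, ...] convex fan around your center point)
    are visible."""
    glClear(GL_STENCIL_BUFFER_BIT)
    glEnable(GL_STENCIL_TEST)

    # Pass 1: write 1s into the stencil buffer where the polygon covers,
    # without touching the color buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    glStencilFunc(GL_ALWAYS, 1, 0xFF)
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE)
    verts = np.asarray(polygon_vertices, dtype=np.float32)
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(2, GL_FLOAT, 0, verts)
    glDrawArrays(GL_TRIANGLE_FAN, 0, len(verts) // 2)
    glDisableClientState(GL_VERTEX_ARRAY)

    # Pass 2: draw the image, keeping only fragments where stencil == 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    glStencilFunc(GL_EQUAL, 1, 0xFF)
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP)
    draw_image_quad()  # the textured quad, clipped by the stencil mask

    glDisable(GL_STENCIL_TEST)
```

The one gotcha is context setup: if the context was created without stencil bits (easy to do by accident on iOS), all of this silently does nothing.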
Try glScissor... it might be the rectangle you want.
