How to make a high ISO effect in a dark room? - algorithm

When mobile or semi-professional cameras are used in a poorly lit room, they typically raise the ISO, and the result is shown below:
This is a frame from a video; as you can see, there is a lot of noise. It may sound a bit odd, but I need to generate similar noise on top of high-quality video. However, a simple noise generator produces something like this:
Does anyone have an idea how to get a result like the first frame? Or is there an existing noise generator/algorithm that can produce it? I will be grateful for any help.

I assume you simply added a small random RGB offset to your image...
I would leave the color as is and instead reduce the color intensity by a small random amount, which darkens the image and creates a similar form of noise. Just multiply your RGB color by a random number less than one...
To improve the visual result, the random value should follow a Gaussian (or similar) distribution. Here is a small C++/VCL example:
//$$---- Form CPP ----
//---------------------------------------------------------------------------
#include <vcl.h>
#include <math.h>
#pragma hdrstop
#include "win_main.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TMain *Main;
Graphics::TBitmap *bmp0,*bmp1;
//---------------------------------------------------------------------------
__fastcall TMain::TMain(TComponent* Owner) : TForm(Owner)
{
// init bmps and load from file
bmp0=new Graphics::TBitmap;
bmp1=new Graphics::TBitmap;
bmp0->LoadFromFile("in.bmp");
bmp0->HandleType=bmDIB;
bmp0->PixelFormat=pf32bit;
bmp1->Assign(bmp0);
ClientWidth=bmp0->Width;
ClientHeight=bmp0->Height;
Randomize();
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormDestroy(TObject *Sender)
{
// free bmps before exit
delete bmp0;
delete bmp1;
}
//---------------------------------------------------------------------------
void __fastcall TMain::tim_updateTimer(TObject *Sender)
{
// skip if App not yet initialized
if (bmp0==NULL) return;
if (bmp1==NULL) return;
int x,y,i,a;
union _color
{
BYTE db[4];
DWORD dd;
};
// copy bmp0 into bmp1 with light reduction and noise
for (y=0;y<bmp0->Height;y++)
{
_color *p0=(_color*)bmp0->ScanLine[y];
_color *p1=(_color*)bmp1->ScanLine[y];
for (x=0;x<bmp0->Width;x++)
{
p1[x]=p0[x];
for (a=40,i=0;i<10;i++) a+=Random(10); // rough "gauss" PRNG: sum of ten uniform values, range 40 .. 130
for (i=0;i<3;i++) p1[x].db[i]=(DWORD(p1[x].db[i])*a)>>8; // multiply RGB by a/256
}
}
// render frame on App canvas
Canvas->Draw(0,0,bmp1);
// bmp1->SaveToFile("out.bmp");
}
//---------------------------------------------------------------------------
input image:
output image:
You can play with the PRNG constants to tweak the darkening and noisiness. With the values above, a is 40 plus ten uniform values from 0..9, so it averages about 85; the RGB channels are therefore scaled to roughly 85/256 ≈ 1/3 of their original brightness, and the spread of that sum provides the per-pixel noise.

What you see in that frame is not the raw sensor noise, nor a simulation of film grain as the accepted answer seems to suggest. Instead, it is the result of a noise-reduction filter that is applied to the high-ISO image. Without the filter, you’d see a lot of noise, mostly Poisson noise.
I don’t know what noise-reduction filter is built into the camera, but it likely is applied to the raw image, before conversion to RGB. Here are many papers describing such filters.
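To make the "mostly Poisson noise" point concrete, here is a minimal C++ sketch (my own illustration, not anything from the camera pipeline) that simulates shot noise on an 8-bit grayscale buffer; the gain parameter is an assumed knob that plays a role similar to ISO:
#include <algorithm>
#include <cstdint>
#include <random>
#include <vector>

// Simulate Poisson (shot) noise: treat each pixel value as an expected photon
// count, draw an actual count from a Poisson distribution, and map it back to
// 0..255. A smaller 'gain' means fewer photons per level, i.e. a noisier image.
void add_shot_noise(std::vector<uint8_t> &pixels, double gain = 0.25)
{
    std::mt19937 rng(std::random_device{}());
    for (auto &p : pixels)
    {
        double expected = std::max(p * gain, 1e-3);   // Poisson mean must be > 0
        std::poisson_distribution<int> photons(expected);
        double noisy = photons(rng) / gain;
        p = static_cast<uint8_t>(std::clamp(noisy, 0.0, 255.0));
    }
}
Applied to a dark image with a small gain, this produces the heavy speckle that the camera's noise-reduction filter then smears into the blotches seen in the question's frame.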

You basically need to increase the "grain" size and maybe flatten out your noise spots. It will be hard to obtain a natural-looking result, as the grain spots in such a video come from various interpolations over values read from the camera sensor (with the specific noise that sensor produces). Take "analog" film cameras, for example: pictures taken on film have that grainy natural look because of the sizes and shapes of the mineral grains used in the film itself. If it were easy to produce a natural-looking film-like filter over digital images, film would not be seeing the comeback it is having now.
With that said, there are three things that come to mind that could work:
Median Blur over (image + some noise), as in the sketch after this list: https://docs.opencv.org/master/d4/d13/tutorial_py_filtering.html
An image generated with Perlin noise (https://github.com/ruslangrimov/perlin-noise-python-numpy), scaled to your specific frame size and blended in as noise * factor + your image = result.
Film a really dark room with a camera set up to produce the same frame resolution at high ISO, then add the obtained video frames over your video (with some image manipulation applied).
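As a rough illustration of the first idea (my own OpenCV/C++ sketch, not the poster's code; it assumes an 8-bit 3-channel input, and sigma/ksize are just values to experiment with):
#include <opencv2/opencv.hpp>

// Idea 1 as a sketch: add Gaussian noise to the image, then median-blur the
// result so the noise clumps into larger, softer "grain" spots.
cv::Mat grainy(const cv::Mat &src, double sigma = 25.0, int ksize = 3)
{
    cv::Mat noise(src.size(), CV_32FC3);
    cv::randn(noise, 0.0, sigma);            // zero-mean Gaussian noise

    cv::Mat img;
    src.convertTo(img, CV_32FC3);
    img += noise;                            // image + some noise

    cv::Mat noisy;
    img.convertTo(noisy, CV_8UC3);           // converts back, saturating to 0..255
    cv::medianBlur(noisy, noisy, ksize);     // flatten/enlarge the noise spots
    return noisy;
}
Larger ksize (5, 7, ...) gives bigger grain; larger sigma gives stronger noise.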
Hope any of this helps and good luck with your project.

Related

Visual C++: Good way to draw and animate filled paths on screen?

I want to use Visual C++ to animate filled paths on screen. I have done it in C# before, but I am now switching to C++ for better performance and want to do more complex work in the future.
Here is the concept in C#:
In a Canvas I have a number of Paths. These paths are closed geometries combined from LineTo and QuadraticBezierTo segments.
First, I fill all paths with Silver.
Then, for each path, I fill it with Green from one end to the other (in an up/down/left/right direction), like a progress bar increasing from min to max. I do this by setting the path's Fill brush to a LinearGradientBrush with two colors, Green and Silver, at the same offset, then increasing the offset from 0 to 1 with a Timer.
When a path is fully green, I continue with the next path.
When all paths are filled with Green, I go back to the first step.
I want to do the same thing in Visual C++. I need to know an effective way to:
Create and store paths in a collection for reuse. Because each path has quite a lot of points, recreating them repeatedly takes a lot of CPU.
Draw all paths to a window.
Animate the fill as in steps 2, 3, 4 of the concept above.
So, what I need is:
A suitable way to create and store closed paths. Note: paths are built from points connected by functions equivalent to the C# LineTo and QuadraticBezierTo functions.
Draw and animate-fill the paths on screen.
Can you please suggest a way to do the steps above? (Outline what I have to read, and I can study it myself.) I know the basics of Visual C++ and Win32 GUI, and a little about device contexts (HDC) and GDI, but I am only starting to learn graphics/drawing.
Sorry about my English! If anything I explained isn't clear, please let me know.
How many is "quite a lot of points"? What is the target framerate? For low enough counts you can use GDI for this; otherwise you need HW acceleration like OpenGL or DirectX.
I assume 2D, so you need to:
store your path as a list of segments
for example like this:
struct path_segment
{
int p0[2],p1[2],p2[2]; // points
int type; // line/bezier
float length; // length in pixels or whatever
};
const int MAX=1024; // max number of segments
path_segment path[MAX]; // list of segments can use any template like List<path_segment> path; instead
int paths=0; // actual number of segments;
float length=0.0; // whole path length in pixels or whatever
write functions to load and render path[]
The render is just a visual check that your load is OK ... for now at least.
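One thing the struct above needs but the answer does not spell out is how to fill in length. Here is one hedged way to approximate it (a chord-sampling sketch; I am assuming type==0 means a straight line from p0 to p1 and anything else means a quadratic Bezier from p0 to p2 with control point p1):
#include <math.h>

float segment_length(const path_segment &s)
{
    if (s.type==0) // assumed: straight line from p0 to p1
    {
        float dx=float(s.p1[0]-s.p0[0]);
        float dy=float(s.p1[1]-s.p0[1]);
        return sqrt(dx*dx+dy*dy);
    }
    // assumed: quadratic Bezier p0 -> p2 with control point p1,
    // approximated by summing 16 small chords
    float len=0.0f,px=float(s.p0[0]),py=float(s.p0[1]);
    for (int i=1;i<=16;i++)
    {
        float t=i/16.0f,u=1.0f-t;
        float x=u*u*s.p0[0]+2.0f*u*t*s.p1[0]+t*t*s.p2[0];
        float y=u*u*s.p0[1]+2.0f*u*t*s.p1[1]+t*t*s.p2[1];
        len+=sqrt((x-px)*(x-px)+(y-py)*(y-py));
        px=x; py=y;
    }
    return len;
}
Summing segment_length over all loaded segments gives the whole-path length used by the split logic below.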
rewrite the render so that
it takes a float t=<0,1> as an input parameter and renders the part of the path below t with one color and the rest with another. Something like this:
int i;
float l=0.0,q,l0=t*length; // separation length;
for (i=0;i<paths;i++)
{
q=l+path[i].length;
if (q>=l0)
{
// split/render path[i] to <0 , l0-l> with color1
// split/render path[i] to <l0-l , path[i].length> with color2
// if you need the split parameter in <0,1> then it is (l0-l)/path[i].length;
i++; break;
}
else
{
//render path[i] with color1
}
l=q;
}
for (;i<paths;i++)
{
//render path[i] with color2
}
use a backbuffer for speedup
So render the whole path to a bitmap first (in the not-yet-filled color2). On each animation step just render the newly added color1 stuff into that bitmap. And on each redraw just copy the bitmap to the screen instead of rendering the same geometry over and over. Of course, if you have zoom/pan/resize capabilities you need to redraw the bitmap fully on each of those changes ...
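If you stay with GDI, the backbuffer is just a memory DC you keep drawing into and blit to the window on every repaint. A minimal Win32 sketch (hwnd, w and h are assumed to exist; error handling omitted):
#include <windows.h>

// create the backbuffer once (e.g. on WM_CREATE or after a resize)
HDC wnd_dc = GetDC(hwnd);
HDC back_dc = CreateCompatibleDC(wnd_dc);
HBITMAP back_bmp = CreateCompatibleBitmap(wnd_dc, w, h);
HBITMAP old_bmp = (HBITMAP)SelectObject(back_dc, back_bmp);

// each animation step: draw only the newly added color1 geometry into back_dc

// each repaint: copy the finished buffer to the window in one call
BitBlt(wnd_dc, 0, 0, w, h, back_dc, 0, 0, SRCCOPY);

// cleanup when the window goes away
SelectObject(back_dc, old_bmp);
DeleteObject(back_bmp);
DeleteDC(back_dc);
ReleaseDC(hwnd, wnd_dc);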

Image Comparison in Android.

I'm an Android learner. I was trying image comparison. It reported the images as not the same for the same image at different sizes, so I rescaled the image using createScaledBitmap(Bitmap src, int dstWidth, int dstHeight, boolean filter). Even after rescaling, the comparison still reported that the images are different. Please help me out.
bmpimg1 = BitmapFactory.decodeFile(path1);
bmpimg2 = BitmapFactory.decodeFile(path2);
int bm1Height = bmpimg1.getHeight();
int bm1Width = bmpimg1.getWidth();
int bm1Res = bm1Height * bm1Width;
int bm2Height = bmpimg2.getHeight();
int bm2Width = bmpimg2.getWidth();
int bm2Res = bm2Height * bm2Width;
if(bm1Res==bm2Res){
Toast.makeText(getApplicationContext(), "Both Images Same Size", Toast.LENGTH_SHORT).show();
if(bmpimg1.sameAs(bmpimg2)){
Toast.makeText(getApplicationContext(), "Same", Toast.LENGTH_LONG).show();
}else{
Toast.makeText(getApplicationContext(), "Not Same", Toast.LENGTH_LONG).show();
}
}
if (bm1Res > bm2Res)
{
Bitmap rbm1 = Bitmap.createScaledBitmap(bmpimg1, bmpimg2.getWidth(), bmpimg2.getHeight(), true);
ImageView imageView1 = (ImageView) findViewById(R.id.image1);
imageView1.setImageBitmap(rbm1);
Toast.makeText(getApplicationContext(), "Image1 has to be Scaled", Toast.LENGTH_SHORT).show();
if(rbm1.sameAs(bmpimg2)){
Toast.makeText(getApplicationContext(), "Same", Toast.LENGTH_LONG).show();
}else{
Toast.makeText(getApplicationContext(), "Not Same", Toast.LENGTH_LONG).show();
}
}
if(bm1Res<bm2Res)
{
Bitmap rbm2 = Bitmap.createScaledBitmap(bmpimg2, bmpimg1.getWidth(), bmpimg1.getHeight(), true);
ImageView imageView2 = (ImageView) findViewById(R.id.image2);
imageView2.setImageBitmap(rbm2);
Toast.makeText(getApplicationContext(), "Image2 has to be Scaled", Toast.LENGTH_SHORT).show();
if(bmpimg1.sameAs(rbm2)) {
Toast.makeText(getApplicationContext(), "Same", Toast.LENGTH_LONG).show();
}else{
Toast.makeText(getApplicationContext(), "Not Same", Toast.LENGTH_LONG).show();
}
}
Updated Answer
In order to answer the question in your comment: there are various approaches, and it depends what you are actually trying to achieve... do you need to detect images that have been rotated relative to each other, for example, or blurred, or smoothed, or tampered with? Some methods are...
Perceptual Hashing - you create a hash for all your images and calculate the distance between images. See here and also the comment about pHash.
Mean Colour - you calculate the mean (or average) colour of your images and compare the means - this method is quite simple.
RMSE or similar - you calculate the Root Mean Square Error over all pixels and look for a low value to indicate the images are similar (a tiny sketch follows this list). This method and all the ones above are easily done with ImageMagick. See, and vote for, Kurt's (@KurtPfeifle) excellent, thorough answer here.
Features - you find shapes and features in your image and compare those - try Googling "SIFT".
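The question is Android/Java, but just to make the RMSE idea concrete, here is a tiny library-neutral sketch in C++ over raw pixel bytes (my illustration; it assumes both images have already been decoded to same-sized buffers):
#include <cmath>

// Root Mean Square Error over n byte values (e.g. the interleaved RGB of both
// images). 0 means identical; small values mean similar; large values mean different.
double rmse(const unsigned char *a, const unsigned char *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
    {
        double d = double(a[i]) - double(b[i]);
        sum += d * d;
    }
    return std::sqrt(sum / n);
}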
Original Answer
It's not a problem with your code; it is a fundamental issue of information loss. If you resize an image down to a smaller size, in general you will lose information, since the smaller image cannot contain the same amount of information as the larger one. There are many things that could be going on...
Colour Loss
Imagine you have a lovely big 1000x1000 image with a smooth gradient and correspondingly millions of colours like this:
If you now resize it down to an image of 32x32, it can now only contain 1,024 colours as a maximum, so when you resize it up again you might get something like this:
And now you can see that banding has happened - where the colours have clumped together into the smaller number of colours that the smaller image can hold.
Format
When you resize an image, the program that does it may change from a true-colour 24 bits per pixel image to a palettised image with just 256, or fewer, colours. The image may look the same, but it won't necessarily compare identically.
Also related to format, you may resize an image and use a different image format that cannot contain all the features of the original, and they will be lost when you resize up again afterwards. For example, your original image may be a PNG with transparency, and you may have resized it down to a JPEG, which cannot contain transparency; resize back up to a PNG and the transparency will be lost.
Resolution/Detail
If you have some sharp text, or details in your image, they may be lost. Say you start with this:
and resize down and then back up again, you may get this because the smaller image cannot hold the details. Again, this will mean that your comparison will fail to spot that they are the same image.
Likewise with simple lines..
and downsized and re-upsized

In Processing, how can I save part of the window as an image?

I am using Processing under Fedora 20, and I want to display an image of the extending tracks of objects moving across part of the screen, with each object displayed at its current position at the end of its track. To avoid having to record all the coordinates of the tracks, I use save("image.png"); to save the tracks so far, then draw the objects. In the next frame I use img = loadImage("image.png"); to restore the tracks made so far, without the objects, which would still be in their previous positions. I extend the tracks to their new positions, then use save("image.png"); to save the extended tracks, still without the objects, ready for the next loop. Then I draw the objects in their new positions at the end of their extended tracks. In this way successive loops show the objects advancing, with their previous positions as tracks behind them.
This has worked well in tests where the image is the whole frame, but now I need to put that display in a corner of the whole frame and leave the rest unchanged. I expect that createImage(...) will be the answer, but I cannot find any details of how to do so.
A similar question asked here has this recommendation: "The PImage class contains a save() function that exports to file. The API should be your first stop for questions like this." Of course I've looked at that API, but I don't think it helps here, unless I have to create the image to save pixel by pixel, in which case I would expect it to slow things down a lot.
So my question is: in Processing can I save and restore just part of the frame as an image, without affecting the rest of the frame?
I have continued to research this. It seems strange to me that I can find oodles of sketch references, tutorials, and examples that save and load the entire frame, but no easy way of saving and restoring just part of the frame as an image. I could probably do it using PImage, but that appears to require an awful lot of "image." prefixes in front of everything to be drawn there.
I have got round it with a kludge: I created a mask image (see this Processing reference) the size of the whole frame. The mask is defined as grey values representing opacity, so that black (0) is transparent and white (255) is fully opaque and completely conceals the background image, thus:
void setup(){
size(1280,800);
background(0); // whole frame is transparent..
fill(255); // ..and..
rect(680,0,600,600); // ..smaller image area is now opaque
save("[path to sketch]/mask01.jpg");
}
void draw(){}
Then in my main code I use:
PImage img, mimg;
img = loadImage("image4.png"); // The image I want to see ..
// .. including the rest of the frame which would obscure previous work
mimg = loadImage("mask01.jpg"); // create the mask
//apply the mask, allowing previous work to show through
img.mask(mimg);
// display the masked image
image(img, 0, 0);
I will accept this as an answer if no better suggestion is made.
void setup(){
size(640, 480);
background(0);
noStroke();
fill(255);
rect(40, 150, 200, 100);
}
void draw(){
}
void mousePressed(){
PImage img = get(40, 150, 200, 100);
img.save("test.jpg");
}
Old news, but here's an answer: you can use the pixel array and math.
Let's say that this is your viewport:
You can use loadPixels(); to fill the pixels[] array with the current content of the viewport, then fish the pixels you want from this array.
In the given example, here's a way to filter the unwanted pixels:
void exportImage() {
// creating the image to the "desired size"
PImage img = createImage(600, 900, RGB);
loadPixels();
int index = 0;
for(int i=0; i<pixels.length; i++) {
// filtering the unwanted first 200 pixels on every row
// remember that the pixels[] array is one-dimensional, so some math is unavoidable. For this simple example I use the modulo operator.
if (i % width >= 200) { // "magic numbers" are bad, remember. This is only a simplification.
img.pixels[index] = pixels[i];
index++;
}
}
img.updatePixels();
img.save("test.png");
}
It may be too late to help you, but maybe someone else will need this. Either way, have fun!

opencv displaying an image over a video frame (with chessboard) - rotation of the image

I didn't know what title would correctly describe my problem, I hope this one is not confusing.
I started my adventure with OpenCV a few days ago. So far I have managed to find a chessboard in a live stream from my webcam and display a resized image on it. My next goal is to make the program rotate the image while I'm rotating the chessboard. Unfortunately I have no idea how to do that; I have seen many codes and many examples, but none of them helped. My ultimate goal is to do something like this: http://www.youtube.com/watch?v=APxgPYZOd0I (I can't get anything from his code; he uses Qt, which I have only met once and I'm not interested in it - yet).
Here is my code:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
vector<Point3f> Create3DChessboardCorners(Size boardSize, float squareSize);
int main(int argc, char* argv[])
{
Size boardSize(6,9);
float squareSize=1.f;
namedWindow("Viewer");
namedWindow("zdjecie");
namedWindow("changed");
Mat zdjecie=imread("D:\\Studia\\Programy\\cvTest\\Debug\\przyklad.JPG");
resize(zdjecie, zdjecie, Size(200,150));
Mat changed=Mat(zdjecie);
imshow("zdjecie", changed);
vector<Point2f> corners;
VideoCapture video(0);
cout<<"video height: "<<video.get(CV_CAP_PROP_FRAME_HEIGHT)<<endl;
cout<<"video width: "<<video.get(CV_CAP_PROP_FRAME_WIDTH)<<endl;
Mat frame;
bool found;
Point2f src[4];
Point2f dst[4];
Mat perspMat;
while(1)
{
video>>frame;
found=findChessboardCorners(frame, boardSize, corners, CALIB_CB_FAST_CHECK);
changed=Mat(zdjecie);
// drawChessboardCorners(frame, boardSize, Mat(corners), found);
if(found)
{
line(frame, corners[0], corners[5], Scalar(0,0,255));
line(frame, corners[0], corners[48], Scalar(0,0,255));
src[0].x=0;
src[0].y=0;
src[1].x=zdjecie.cols;
src[1].y=0;
src[2].x=zdjecie.cols;
src[2].y=zdjecie.rows;
src[3].x=0;
src[3].y=zdjecie.rows;
dst[0].x=corners[0].x;
dst[0].y=corners[0].y;
dst[1].x=corners[boardSize.width-1].x;
dst[1].y=corners[boardSize.width-1].y;
dst[2].x=corners[boardSize.width*boardSize.height-1].x;
dst[2].x=corners[boardSize.width*boardSize.height-1].y;
dst[3].x=corners[boardSize.width*(boardSize.height-1)].x;
dst[3].y=corners[boardSize.width*(boardSize.height-1)].y;
perspMat=getPerspectiveTransform(src, dst);
warpPerspective(zdjecie, changed, perspMat, frame.size());
}
imshow("changed", changed);
imshow("Viewer", frame);
if(waitKey(20)!=-1)
break;
}
return 0;
}
I was trying to understand this code: http://dsynflo.blogspot.com/2010/06/simplar-augmented-reality-for-opencv.html
but nothing helped. It didn't even work for me - the image from my webcam was inverted, frames were changing only every few seconds, and nothing was being displayed.
So what I am asking for is not a whole solution. If someone explained to me how to do it, I'd be glad. I want to understand it from the basics, and I just don't know where to go now. I have spent a lot of time trying to solve it; if I hadn't, I would not bother you with my problem.
I'm looking forward to your answers!
Greetings,
Daniel
EDIT:
I changed the code. Now you can see how I try to warp the perspective onto my image. At first I thought the reason my image Mat changed (it was Mat krzywe; I renamed it so it won't be confusing) comes out black after calling warpPerspective was that I don't restart the warp from the original photo every time. So I added the line
changed=Mat(zdjecie)
I guess my problem is pretty simple to solve, but I really have no idea right now.
As I see it, there are two problems. First, the coordinates you are warping from and to are wrong. The source coordinates are simply the corners of the source image:
src[0].x=0;
src[0].y=0;
src[1].x=zdjecie.cols;
src[1].y=0;
src[2].x=zdjecie.cols;
src[2].y=zdjecie.rows;
src[3].x=0;
src[3].y=zdjecie.rows;
The destination coordinates must be the corner points of the detected chessboard corners. Which points are the corner points changes with the size of the chessboard. Since my chessboard had different dimensions, I made it adaptive by taking the size of the board into account:
dst[0].x=corners[0].x;
dst[0].y=corners[0].y;
dst[1].x=corners[ boardSize.width-1 ].x;
dst[1].y=corners[ boardSize.width-1 ].y;
dst[2].x=corners[ boardSize.width * boardSize.height - 1 ].x;
dst[2].y=corners[ boardSize.width * boardSize.height - 1 ].y;
dst[3].x=corners[ boardSize.width * (boardSize.height-1) ].x;
dst[3].y=corners[ boardSize.width * (boardSize.height-1) ].y;
The second problem is that you warp into an image that only has the size of the source image, but it needs to have the size of the target image:
warpPerspective(zdjecie, changed, perspMat, frame.size());
Another thing I found peculiar is that your imshow("Viewer", frame); call is inside the if-clause. This means the image is only updated if a chessboard was found. I am not sure if that was intended.
Now you should have one window showing the video and another window showing the transformed source image. Your next step would be to blend those two images together.
Update:
Here is how I merge the two images:
Mat mask = changed > 0;
Mat merged = frame.clone();
changed.copyTo(merged,mask);
The mask matrix is true for all pixels that are not zero in the warped image. Then all non-zero pixels from the warped image are copied into the frame.

Kinect Color and Depth Stream Performance

I am developing an XNA game. In my game I use Kinect's color stream and depth stream to get an image of the player only. For this purpose I check the depth pixels and find the pixels with PlayerIndex > 0. Then I map these depth points to color points with the MapDepthPointToColorPoint method and get the colors of these pixels from the color video.
This works, but the performance is very bad (especially for a game). When I disable the color stream and just set the player pixels to black, everything is smooth. But when I enable the color stream it is not very efficient.
I tried two things in the AllFramesReady function for this:
1-
using (ColorImageFrame colorVideoFrame = imageFrames.OpenColorImageFrame())
{
//color logic
}
using (DepthImageFrame depthVideoFrame = imageFrames.OpenDepthImageFrame())
{
//depth logic
}
and
2-
using (ColorImageFrame colorVideoFrame = imageFrames.OpenColorImageFrame())
{
colorReceived=true;
}
if (colorReceived){
//color logic
}
using (DepthImageFrame depthVideoFrame = imageFrames.OpenDepthImageFrame())
{
depthReceived=true;
}
if (depthReceived){
//depth logic
}
The second one seems to perform better because it applies the color and depth logic outside of the using blocks and returns the resources to the Kinect as soon as possible. But when a second player comes in, performance decreases drastically, and sometimes the image of the player just disappears for 1 or 2 frames when I use the second approach.
What more can I do to increase the performance of the color and depth streams? Thanks for any help.
