I'm working on a feature in a Codename One application where the user can capture a photo and save/display it with date and time details on it. Can these details be stored as part of the image? How can I display these details on the image?
You need to use a mutable image: draw the original image onto it, then draw the date and time, and save the result.
// create a mutable image and get its Graphics so we can draw on it
Image img = Image.create(original.getWidth(), original.getHeight());
Graphics g = img.getGraphics();
g.drawImage(original, 0, 0);
// draw the date/time text in the top-left corner
g.setColor(0xffffff);
g.setFont(myFont);
g.drawString(todaysDateAndTime, 0, 0);
// save the result so the text becomes part of the image
ImageIO io = ImageIO.getImageIO();
io.save(....);
I wrote the code above from memory, so it might have some issues, and I didn't implement the call to save, but the basics should point you in the right direction.
Related
I'm a beginner in Unity. From what I've read so far, PlayerPrefs can store an image in byte form. Still, I'm having trouble with this, because I get an error when converting a Texture2D to bytes, for which I use this code:
Texture2D tex = Resources.Load("Sprites/white_black") as Texture2D;
byte[] texbyte = tex.EncodeToPNG();
// ^ this line always results in a NullReferenceException in my console.
I have two scenes. Scene 1 contains the images that are going to be saved to PlayerPrefs, an example of which is shown below:
In Scene 2, all the images that were saved to PlayerPrefs are shown using a button, as in the picture below:
Also, if you can recommend another solution, I'll look into it. Thank you.
You should not save images in PlayerPrefs. Instead, just save the name of the sprite to PlayerPrefs, and then load it in the other scene from resources, like so: Resources.Load(spriteName);
My requirement is to detect a human face in a given CCTV image. The CCTV image will contain unnecessary objects, which need to be removed. If the obtained face image is blurry, its quality needs to be improved as well.
Currently we are trying this with the OpenCV API; the code is as follows:
CascadeClassifier cascadeClassifier =
        new CascadeClassifier("haarcascade_profileface.xml");
Mat image = Highgui.imread("testing.jpg");
MatOfRect bodyDetections = new MatOfRect();
cascadeClassifier.detectMultiScale(image, bodyDetections);
for (Rect rect : bodyDetections.toArray()) {
    BufferedImage croppedImage = originalPic.getSubimage(rect.x,
            rect.y, rect.width, rect.height); // unable to detect the body coordinates here
}
With the above approach, multiple objects in the image are detected as faces, which is wrong.
If the CCTV image shows only the side of a face, how can I extract the complete face?
Please suggest the best possible way to achieve my requirement.
Thanks
You may want to look at the new AWS solution
https://aws.amazon.com/blogs/aws/category/amazon-rekognition/
Sorry for the lengthy explanation; as I am new to OpenCV, I want to give more details with an example.
My requirement is to find the delta of two static images. For this I am using the following technique:
cv::Mat prevImg = cv::imread("prev.bmp");
cv::Mat currImg = cv::imread("curr.bmp");
cv::Mat deltaImg;
cv::absdiff(prevImg, currImg, deltaImg);
cv::namedWindow("image", CV_WINDOW_NORMAL);
cv::imshow("image", deltaImg);
In deltaImg I do get the difference between the images, but it also includes the background of the first image. I know I have to remove the background using BackgroundSubtractorMOG2, but I am unable to understand how to use this class, as most of the examples are based on webcam captures.
Please note that my images are static (screenshots of desktop activity).
Please guide me in resolving this issue; some sample code would be helpful.
Note: I want to calculate the delta in RGB.
Detailed explanation:
Images are at : https://picasaweb.google.com/105653560142316168741/OpenCV?authkey=Gv1sRgCLesjvLEjNXzZg#
prev.bmp: the previous screenshot of my desktop
curr.bmp: the current screenshot of my desktop
The delta between prev.bmp and curr.bmp should be only the Start menu image; please find the image below:
The delta image should contain only the Start menu, but it also contains the background image from prev.bmp; it is this background that I want to remove.
Thanks in advance.
After computing cv::absdiff, your image contains non-zero values for each pixel that changed its value. So you want to use all the image regions that changed.
cv::Mat deltaImg;
cv::absdiff(currImg,prevImg,deltaImg);
cv::Mat grayscale;
cv::cvtColor(deltaImg, grayscale, CV_BGR2GRAY);
// create a mask that includes all pixel that changed their value
cv::Mat mask = grayscale>0;
cv::Mat output;
currImg.copyTo(output,mask);
Here are sample images:
previous:
current:
mask:
output:
and here is an additional image for the deltaImg before computing the mask:
Problems occur if foreground pixels have the same value as background pixels but belong to some other 'object'. You can use the cv::dilate operator (followed by cv::erode) to fill single-pixel gaps. Or you might want to extract just the rectangle of the Start menu if you are not interested in all the other parts of the image that changed, too.
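The masking step, and the dilate-then-erode "closing" suggested above, can also be sketched outside OpenCV. Purely as an illustration, here is the same idea in Python with NumPy, with a hand-rolled 3×3 dilation/erosion standing in for cv::dilate/cv::erode; the function names and the tiny test images are hypothetical:

```python
import numpy as np

def change_mask(prev_img, curr_img):
    """Boolean mask of pixels that differ in any channel (like absdiff > 0)."""
    delta = np.abs(prev_img.astype(np.int16) - curr_img.astype(np.int16))
    return delta.sum(axis=2) > 0

def dilate3(mask):
    """3x3 dilation: a pixel becomes True if any of its neighbors is True."""
    padded = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy : 1 + dy + mask.shape[0],
                          1 + dx : 1 + dx + mask.shape[1]]
    return out

def erode3(mask):
    """3x3 erosion, via duality: erode(m) == ~dilate(~m)."""
    return ~dilate3(~mask)

def close3(mask):
    """Closing = dilate then erode; fills single-pixel gaps in the mask."""
    return erode3(dilate3(mask))

# Tiny example: a 5x5 "screen" where a 3x3 block changed, with one pixel
# inside the block coincidentally keeping the same value as the background.
prev_img = np.zeros((5, 5, 3), np.uint8)
curr_img = prev_img.copy()
curr_img[1:4, 1:4] = 200
curr_img[2, 2] = 0            # unchanged value -> hole in the raw mask

raw = change_mask(prev_img, curr_img)
closed = close3(raw)
print(raw[2, 2], closed[2, 2])   # False True
```

With real screenshots you would of course keep cv::absdiff/cv::dilate/cv::erode; the point is only that the mask means "any channel changed", and closing it fills isolated pixels whose value happens to match the background.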
I am implementing an iPhone/Android application using Titanium, in which I want to crop an image. Please check the flow I have implemented below:
1) Open camera
2) Take photo
3) Display photo in the application
4) Crop photo to some area
I have implemented the first three steps, but I am facing an issue with cropping. I have tried the code below for cropping:
var file = Titanium.Filesystem.getFile(header1).toBlob();
var cropped = file.imageAsCropped(10, 10, 290, 232);
var croImg = Ti.UI.createImageView({
    image: file,
    top: 0,
    left: 0,
    width: 290,
    height: 232
});
win1.add(croImg);
But I can't get it to work. If you could give me some advice, I would appreciate it.
Thanks
There's another way to do it: use a separate imageView, smaller than the original image, and set which portion of the image to display. Check out this link, which gives a good example:
http://developer.appcelerator.com/question/72431/crop-imageview
Is it possible to output images so that they will all be inside a single window?
Before, I used to output data using only OpenCV functions:
cvNamedWindow("Image 1");
cvShowImage("Image 1", img);
So I change the image, then call the cvShowImage function, and so on.
But if I want to look at more than one image, every new image needs its own window to be shown in. What I want is to put every such OpenCV output window inside one big main window.
Is it possible to do it? And how?
You will have to construct a new image and place each img into it. I don't think there's a built-in function like MATLAB's subplot. I recommend using the ROI functions to quickly copy an image into a region of interest (ROI) of the big image that holds the others.
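To illustrate the ROI idea, here is a sketch in Python with NumPy rather than the C API (the image sizes and names are hypothetical): copying each small image into a slice of one big canvas.

```python
import numpy as np

# Two hypothetical 100x100 BGR "images"
img1 = np.full((100, 100, 3), 50, np.uint8)
img2 = np.full((100, 100, 3), 200, np.uint8)

# One big canvas that will hold both, side by side
canvas = np.zeros((100, 200, 3), np.uint8)

# "Set ROI + copy": a slice of the canvas plays the role of the ROI
canvas[0:100, 0:100] = img1      # left half
canvas[0:100, 100:200] = img2    # right half

print(canvas.shape)  # (100, 200, 3)
```

In the old C API this corresponds roughly to calling cvSetImageROI(big, rect); cvCopy(img, big); cvResetImageROI(big); once per sub-image, then showing the big image in a single window.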
You can show as many images as you want in a single window using the hconcat function.
Let us suppose your original image is
Mat frame;
Now clone or make a copy of this image using
Mat frame1 = frame.clone(); // or
Mat frame2;
frame.copyTo(frame2);
Now let us suppose your output images are
Mat img1,img2,img3,img4;
Now if you want to show images horizontally, use
hconcat(img1, img2, frame1); // hconcat(input_image1, input_image2, destination_image);
And if you want to show images vertically, use
frame2.push_back(img1);//main_image.push_back(image_to_be_shown_below);
This concatenates two images at a time, so if you want to show four images side by side, you have to call the function three times, as in
hconcat(img1,img2,frame1);
hconcat(frame1,img3,frame1);
hconcat(frame1,img4,frame1);
imshow("Final Image",frame1);
NOTE:
The clone is done because the images have to be the same size and type: hconcat requires all inputs to have the same height, and vconcat/push_back requires the same width.
Enjoy...
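For comparison only, the same horizontal and vertical concatenation can be sketched in Python with NumPy (hypothetical toy arrays standing in for the cv::Mat images):

```python
import numpy as np

# Four hypothetical single-channel 2x2 "images"
img1, img2, img3, img4 = (np.full((2, 2), v, np.uint8) for v in (10, 20, 30, 40))

# hconcat equivalent: stack left-to-right (heights must match)
row = np.hstack([img1, img2, img3, img4])

# vconcat / push_back equivalent: stack top-to-bottom (widths must match)
col = np.vstack([img1, img2])

print(row.shape, col.shape)  # (2, 8) (4, 2)
```

As with hconcat/vconcat, np.hstack requires matching heights and np.vstack matching widths, which is why the note above insists on the images being the same size.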