I am doing a project where I need to determine whether an image is self-taken (a selfie) or not. I am unsure whether to use a machine learning approach or an image processing approach; I suspect machine learning may not give me very effective results, so I am looking for an image processing method. Can anyone help me out here? I can detect the faces in the image, but that is not enough.
Machine learning will work way better in my opinion. With a few examples you should get a correct result, but it won't be perfect.
If you want something more accurate you will need other details. For example, if it is a smartphone application, you know whether the front camera was used. Another great source of data is the accelerometer, because selfies are not taken the same way as regular pictures.
If you have faces in the picture and another relevant indicator, you can probably state that the picture is a selfie; see the sketch below.
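As a rough sketch of that heuristic (assuming Python with OpenCV; the front_camera_used flag would have to come from your app, since the image alone does not give it to you here):

import cv2

def looks_like_selfie(image_path, front_camera_used):
    # Detect faces with the stock Haar cascade shipped with OpenCV
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # At least one face plus a second indicator (front camera) -> call it a selfie
    return len(faces) > 0 and front_camera_used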
I am trying to rotate a UIButton so it looks like a diamond instead of a square. I have searched a lot on the site and could not find anything that could help me. I am having a lot of trouble with this and need some help; I think it will increase the appeal of my app when it is running. This is a problem I have had for a while and have been struggling with, so any help is welcome. I don't know if this is a built-in feature in Xcode or if I need to do it programmatically.
Any suggestions would be greatly appreciated.
Thanks in advance.
You could simply apply an affine transform to the button. Something like this:
// inside your view controller, e.g. in viewDidLoad()
myButton.transform = CGAffineTransform(rotationAngle: CGFloat.pi / 4) // a 45° rotation turns the square into a diamond
However, that will rotate the entire button, including the title. If that's not what you want, it gets more complicated; one option is to counter-rotate the title label, as in the sketch below.
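This is only a sketch (it assumes an outlet named myButton, and the title label's transform may need to be re-applied if the button gets laid out again):

import UIKit

class DiamondButtonViewController: UIViewController {
    @IBOutlet weak var myButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Rotate the whole button 45 degrees so its square frame reads as a diamond...
        myButton.transform = CGAffineTransform(rotationAngle: CGFloat.pi / 4)
        // ...and rotate the title label back so the text stays horizontal.
        myButton.titleLabel?.transform = CGAffineTransform(rotationAngle: -CGFloat.pi / 4)
    }
}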
I don't know how to create an image slideshow with Processing. Can someone please create a slideshow for me to use as a sample? And can you briefly explain the code, statement by statement?
Since you didn't try anything and you don't provide any code, consider the below as a starting point:
A slideshow is a list/array of images appearing at the same position in a specific order. At any given time, only one image should be visible.
Also, consider the PImage data type and the image() function.
You can find more in the Processing reference.
And just be creative. There is no single rule for making a slideshow; you can do it in many different ways.
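For example, a minimal sketch (this assumes three images named slide0.jpg, slide1.jpg and slide2.jpg in the sketch's data folder, and switches slides every two seconds):

PImage[] slides = new PImage[3];   // the list/array of images
int current = 0;                   // index of the image shown right now
int lastSwitch = 0;                // time (in ms) when the slide last changed
int interval = 2000;               // show each slide for two seconds

void setup() {
  size(640, 480);
  for (int i = 0; i < slides.length; i++) {
    slides[i] = loadImage("slide" + i + ".jpg");  // expects data/slide0.jpg, etc.
  }
}

void draw() {
  background(0);
  image(slides[current], 0, 0, width, height);   // only one image visible at a time
  if (millis() - lastSwitch > interval) {        // time to move to the next slide
    current = (current + 1) % slides.length;     // wrap around after the last one
    lastSwitch = millis();
  }
}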
I've been trying for a long time to change the colormap of my images, using a custom 256x3 colormap, to turn the impression a 'normal-sighted' person gets into the one a person with deuteranopia (red-green blindness) would see.
The colormap has already been created, but I cannot manage to apply it to the original image.
The code
load('ColormapsDefVis.mat')
fig=figure
a=imread('Regenbogen.png');
[b map]=rgb2ind(a,256);
c=ind2rgb(b, DeuteranopiaColorMap);
imshow(c);
did not work, and neither did this:
load('ColormapsDefVis.mat')
fig=figure
a=imread('Regenbogen.png');
imshow(a);
set(fig,'Colormap',DeuteranopiaColorMap)
Does anyone know how to apply the custom colormap correctly?
I would appreciate your help very much!
The first piece of code works. You must be making a mistake or misinterpreting the results.
A couple of things to check: make sure your image is uint8 or single. Additionally, you probably don't want dithering to happen, so I suggest you use rgb2ind(a, 256, 'nodither'). The results of the dithering may be fooling your eyes, but we can't know, as you didn't post any image.
To convince yourself that the rgb2ind/ind2rgb approach works, see the code below. You should be able to test it on your computer.
img = imread('cameraman.tif');    % grayscale uint8 image, values 0-255
indimg = img;                     % treat the gray values as colormap indices
cmap = hsv(256);                  % colormap 1 (256 rows, one per possible index)
cmap2 = cmap(end:-1:1,:);         % colormap 2 (the same map, reversed)
subplot(121);
c = ind2rgb(indimg, cmap);
imshow(c)
subplot(122);
c2 = ind2rgb(indimg, cmap2);
imshow(c2)
Thank you very much for your help Ander; the approach using colormaps was just not the efficient solution I was looking for. Although I am sure it would work too, there is a much easier solution to the problem :)
To recreate the impression of red-green blindness, I started from the idea that people with this condition cannot distinguish between reddish and greenish objects. So my solution is to write the mean of the red and green RGB channels back into both channels, making them indistinguishable.
a=imread('peppers.png');
figure,
subplot(121)
imshow(a)
c = uint8((double(a(:,:,1)) + double(a(:,:,2))) / 2); % mean of the red and green channels, computed in double to avoid uint8 saturation
a(:,:,1) = c; % replace the red channel with the mean
a(:,:,2) = c; % replace the green channel with the mean
subplot(122)
imshow(a)
Unfortunately, Stack Overflow does not let me upload images as I do not have enough reputation, but the code should work with any RGB image, for example the built-in demo image 'peppers.png'.
Hope this helps anyone else too!
I need help with OpenCV to solve the following problem:
I have some pictures of one place.
I need to compare each photo with the first one.
In every photo except the first, I need to draw frames (rectangles) around the regions where something changed.
Can anyone advise me or recommend a simple example?
That sounds like a job for background subtraction.
BackgroundSubtractorMOG2 is the class you will probably want.
I have not used it myself, but here is a tutorial.
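Roughly, the idea could look like this (an untested sketch in Python with OpenCV; the file names are placeholders, and the threshold for ignoring tiny regions is an arbitrary choice):

import cv2

files = ['photo0.jpg', 'photo1.jpg', 'photo2.jpg']  # your pictures; the first one is the reference

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
subtractor.apply(cv2.imread(files[0]))              # learn the first photo as the background

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
for name in files[1:]:
    img = cv2.imread(name)
    mask = subtractor.apply(img, learningRate=0)    # 0 = do not update the background model
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)  # remove small noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 100:                             # skip tiny specks
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)  # frame the changed region
    cv2.imwrite('changes_' + name, img)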