I have a program with a slider, and when I move it up or down (left or right) the color should change gradually. Sadly I am not able to achieve this. My colors change, yes, but very suddenly! I have the 7 colors of the rainbow in separate .png files, and when I scroll, the respective one comes up. I was wondering if there was anything I could do to make the colors morph or blend into each other so that the transition appears much smoother.
Thank you
UPDATE
if (self.slider.value > 7)
{
    self.label.text = @"red";
    // self.imgView.backgroundColor = [UIColor redColor];
    // self.imgView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"redPicture"]];
    // colorWithRed:green:blue: expects components in the 0.0-1.0 range
    self.view.backgroundColor = [UIColor colorWithRed:146/255.0 green:50/255.0 blue:146/255.0 alpha:1];
}
This is going to be a generalized answer because I'm not going to write your code for you (not least because I have never written a line of Xcode in my life), but this should put you on the right track.
You want a continuous spectrum of color, so that should tell you right off the bat that using a series of if statements is the wrong way to go. Instead, you should calculate the color you want by doing some math with the slider value directly.
You haven't told me what your slider range is and whether it's discrete, so for the purposes of this answer let's call the lowest value min and the highest value max, just to keep things general. So your total range is max - min. Let's express the value of your slider as a percentage along this range; we can calculate this as (self.slider.value - min) / (max - min). (For instance, for a slider that goes from 0 to 50, a slider value of 37 gives you (37-0)/(50-0) = 0.74.)
So now you should have a decimal value between 0 and 1, which you can map onto the Hue-Saturation-Value color scale. I don't know if Xcode has an HSV method directly (this answer has some code which might be helpful), but if not, it's pretty easy to convert HSV to RGB.
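Language aside, the whole mapping is only a few lines. Here is a rough sketch in Python (the function and variable names are mine, purely for illustration), using the standard colorsys module to turn the slider fraction into an RGB color:

import colorsys

def slider_to_rgb(value, minimum, maximum):
    # Normalize the slider value to a 0..1 fraction of its range.
    fraction = (value - minimum) / (maximum - minimum)
    # Use that fraction as the hue; keep saturation and brightness at full.
    r, g, b = colorsys.hsv_to_rgb(fraction, 1.0, 1.0)
    return r, g, b  # each component is in 0..1

# Example: a slider from 0 to 50 at value 37 gives hue 0.74.
print(slider_to_rgb(37, 0, 50))

Because the hue varies continuously with the slider, the color blends smoothly instead of jumping between seven fixed images.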
This is a small equation that's giving me a headache; I'm close to solving it, but... ugh.
I'll try to be prompt.
I have this:
As you can see, it is a slider that goes from 0.1x to 3x difficulty.
I have other sliders like this, for audio for example that just go from 0% to 100%.
That works fine.
However, with a minimum value greater than 0 my math breaks a bit, and I'm stuck unable to slide the bar all the way to the bottom because the minimum isn't 0 but 0.1 instead.
I want to make it so that even if the minimum value isn't 0, the bar goes all the way to empty.
Here is the relevant equations/calculations at play:
var percent = val/val_max
var adjustment = ((x2-x1)*val_min)-((((x2-x1)*val_min)*percent)*val_min)
var x2_final = (x1+((x2-x1)*percent))-adjustment
percent is the percentage of the current value relative to the max value (0.0 to 1.0)
adjustment is trying to find how much to additionally add/remove from x2_final based on the current value to keep the slider properly scaled when the minimum value isn't 0. (This is where the problem is)
x2_final is the final (in pixels) coordinate where the slider should stop based on the previous calculations.
Initially the slider would overfill when full (that was fixed by the current adjustment), but now the slider doesn't go all the way to empty and leaves a "0.1" worth of slider.
I don't usually use forums or Stack Overflow, as I try to figure things out on my own, so I apologize if my explanation needs work.
Here is what the slider looks like when I set it as low as it will go:
Also, if I have more math-related problems, are there any good tools I can use to simulate calculations like this, so people can run them for themselves?
Thanks in advance!
Solved it!
So my equation for the adjustment was a bit wrong.
It now looks like this:
var percent = val/val_max
var percent_min = val_min/val_max
var adjustment = ((x2-x1)*percent_min)-(((x2-x1)*percent_min)*percent)
var x2_final = (x1+((x2-x1)*percent))-adjustment
And now the slider properly fills to full as well as empties to the bottom,
regardless of what the minimum value is.
But I also noticed that the bar wasn't following my mouse as I was sliding it.
So to fix that, I had to go later in my code, where I update the current value of the slider as the user clicks and drags it...
var mouse_x_percent = round2((mouse_x-x1+adjustment)/(x2-x1+adjustment),2)
I just had to add the adjustment to both parts of that calculation (getting mouse_x's percentage relative to the beginning and end of the slider itself, which is then used to calculate the new value by multiplying mouse_x_percent by the max value).
(round2() takes two numbers: the first is the number to round, and the second is the number of decimal places to round to.)
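For anyone who wants to poke at the numbers, here is the corrected math transcribed into a small Python sketch (the function name is mine; x1, x2, val_min and val_max mean the same as above):

def fill_x(val, val_min, val_max, x1, x2):
    # Fraction of the maximum value, and the fraction taken up by the minimum.
    percent = val / val_max
    percent_min = val_min / val_max
    # Pull the fill back so a value at the minimum sits essentially at x1 (empty),
    # while a value at the maximum still lands exactly on x2 (full).
    adjustment = (x2 - x1) * percent_min - (x2 - x1) * percent_min * percent
    return x1 + (x2 - x1) * percent - adjustment

# A fill bar drawn from pixel 100 to 300 for a 0.1x..3x difficulty range:
print(fill_x(3.0, 0.1, 3.0, 100, 300))  # 300.0 -> completely full
print(fill_x(0.1, 0.1, 3.0, 100, 300))  # ~100.22 -> visually empty (a fraction of a pixel from x1)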
Always love solving a problem, I hope this helps someone else.
I finally solved my problem here with lennon310.
I have a picture of thousands of thin peaks in a time-frequency plot.
I cannot see all of them at the same time in one picture.
Depending on the physical width of my time window, some peaks become visible and others disappear.
Pictures of my data which I plot by imagesc
All pictures are from the same data points T, F, B.
How can I plot all the peaks at once in one picture in MATLAB?
You need to resize the image using resampling to prevent the aliasing effect (that craigim described as unavoidable).
For example, the MATLAB imresize function can perform anti-aliasing. Don't use the "nearest" resize method, that's what you have now.
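The same idea outside MATLAB, sketched with Python and Pillow purely to illustrate why the resampling filter matters (the file names are placeholders): nearest-neighbor keeps or drops whole pixels, while an averaging filter such as Lanczos folds the narrow peaks into the surviving pixels instead of discarding them.

from PIL import Image

img = Image.open("spectrogram_full_resolution.png")  # placeholder file name

# Nearest-neighbor keeps or drops whole source pixels, so thin peaks vanish.
aliased = img.resize((600, 600), resample=Image.NEAREST)

# Lanczos (or bilinear/bicubic) averages neighborhoods while downscaling,
# so every narrow peak still contributes something to the output.
smooth = img.resize((600, 600), resample=Image.LANCZOS)

aliased.save("aliased.png")
smooth.save("antialiased.png")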
Extension to BenVoigt's answer
My try
B = abs(B);
F1 = filter2(B,T); % you need a different filter kernel because resolution is lower
T = filter2(B,F);
F = F1;
image1 = imagesc(B);
display1 = imresize(image1, [600 600], 'bilinear');
imshow(T*t, F*fs, display1);
There are still some problems: I again get a picture where the resolution is not sufficient.
2nd Extension to BenVoigt's answer
My suggestion for a kernel filter is a convolution with a small relative random error:
data(find(data ~= 0)) = sin(pi .* data(find(data ~= 0))) ./ (pi*data(find(data ~= 0)));
data(find(data == 0)) = 1; % finally remove the discontinuity at zero (sinc(0) = 1)
data1 = data + 0.0000001 * mean(abs(data(:))) * randn(size(data));
data = conv(data, data1);
Is this what BenVoigt means by the kernel filter for distribution?
This gives results like
where the resolution is still a problem.
The central peaks tend to multiply easily if I resize the window.
I had old code active in the above picture but it did not change the result.
The above code is still not enough for the kernel filter of the display.
Probably, some filter functions still have to be applied to the time and frequency axes separately, something like:
F1 = filter2(B,T); % you need a different kernel filter because resolution is lower
T = filter2(B,F);
F = F1;
These filters mess up the values on both axes.
I need to understand them better to fix this.
But first I need to understand whether they are the right way to go at all.
The figure still has to be resized.
The size of the data was 5001x1 double and those of F and T 13635x1 double.
So I think I should resize last, after setting the axes, labels and title, with
imresize(image, [13635 13635], 'bilinear');
since the distribution is bilinear.
3rd Extension to BenVoigt's answer
I plot the picture now by
imagesc([0 numel(T)], [0 numel(F)], B);
I have a big aliasing problem in my pictures.
Probably, something like this should be used to manipulate the time-frequency representation:
T = filter2(B,t); % you need a different filter kernel because resolution is lower
F = filter2(B,fs);
4th extension to BenVoigt's answer and comment
I removed the filters and the addition of relative random errors.
I set the sizes of T, F and B, and run
imagesc([0 numel(T)], [0 numel(F)], B, [0 numel(B)])
I still get significant aliasing, but a different picture.
This isn't being flippant, but I think the only way is to get a wider monitor with a higher pixel density. MATLAB and your video card can only show so many pixels on the screen, and must decide which to show and which to leave out. The data is still there, it just isn't getting displayed. Since you have a lot of very narrow lines, some of them are going to get skipped when decisions are made as to which pixels to light up. Changing the time window size changes which lines get aliased away and which ones get lucky enough to show up on the screen.
My suggestions are, in no particular order: convolute your lines with a Gaussian along the time axis to broaden them, thus increasing the likelihood that part of the peak will appear on the screen. Print them out on a 600 dpi printer and see if they appear. Make several plots, each zooming in on a separate time window.
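As a rough illustration of the first suggestion, in Python/NumPy rather than MATLAB and with a made-up stand-in matrix, broadening the peaks along the time axis before plotting could look something like this:

import numpy as np
from scipy.ndimage import gaussian_filter1d

# Stand-in for the real time-frequency matrix: a few isolated narrow peaks.
B = np.zeros((256, 5000))
B[60, ::137] = 1.0
B[180, ::211] = 1.0

# Smear each peak over a few time bins (axis 1 = time) so it still
# contributes after the plot is squeezed down to screen resolution.
B_broadened = gaussian_filter1d(B, sigma=3.0, axis=1)

The sigma is in units of time bins; something on the order of (number of time bins / screen width in pixels) is a reasonable starting point.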
5th Extension to BenVoigt's answer
My attempt to tell imagesc how big to make the image, instead of letting it use the monitor size, so it won't throw away data:
imagesc(T*t, F*fs, B, [50 50]);
% imagesc(T*t, F*fs, B');
% This must be last
imresize(image, [64 30], 'bilinear');
I am not sure where I should specify the number of pixels on the x-axis and y-axis, whether in imagesc or in imresize.
What is the correct way of resizing the image here?
I am working on a project in MATLAB which will extract the background from an image. For example, if this is an image,
it should give me the locations/coordinates of the background (the blue part) or of the person. So far I have calculated:
1) edges using canny
2) connected components
Is there any detailed work, algorithm or paper on this that I could follow?
Edit
The problem I am facing is that if I detect edges, it gives me a binary image. So if I assume that all pixels with value 0 (black) are my background, how would I determine whether I(r,c) is part of the person or part of the background?
Note that this is just one way to do it, but it should work.
Assuming you can make a matrix with the following values:
1 if it is (in the range of) your background color
0 otherwise
And assuming the background is only 'outside' the person (though it may still work if there is just a bit of hair mixed into the background), then a simple way to check whether something is background would be to
observe the neighborhood of each pixel in the matrix
if the average value is high enough (say over 0.2), assume it is a background pixel; otherwise assume it is a non-background pixel.
Store the result in your new matrix and you have all the locations of background pixels
So far it is quite straightforward and does not even use the fact that you already calculated the edges. Now with those edges you can make the following improvement:
If a pixel is far enough 'inside' the edges (simpler: close enough to the center of them), do not consider it a candidate for background. This should help in case someone has big blue eyes.
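A minimal sketch of that neighborhood test in Python/NumPy (a box average over the 0/1 background-color matrix; the 15-pixel window and the 0.2 threshold are just example numbers):

import numpy as np
from scipy.ndimage import uniform_filter

# mask: 1.0 where the pixel is (in the range of) the background color, 0.0 otherwise.
# Here a random stand-in; in practice build it from your color test.
mask = (np.random.rand(480, 640) > 0.5).astype(float)

# Average value of each pixel's neighborhood (a 15x15 box around it).
neighborhood_mean = uniform_filter(mask, size=15)

# Pixels whose surroundings are mostly background color are marked as background.
is_background = neighborhood_mean > 0.2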
I'm trying to calculate the average value of one color over the whole image, in order to determine how color, saturation, intensity or any other such value changes between frames of a video.
However, I would like to get just one value that describes the whole frame (and a single, chosen color in it). Calculating a simple average value of the color in a frame gives me very small differences between video frames, just 2-3 on a 0..255 scale.
Is there any other method to determine the color of the image, other than a histogram, which as I understand will give me more than one value describing a single frame?
Which library are you using for image processing? If it's OpenCV (or Matlab) then the steps here will be quite easy. Otherwise you'd need to look around and experiment a bit.
Use a Mean Shift filter on RGB (or gray, whichever) to cluster the colors in the image - nearly similar colors are clustered together. This lessens the number of colors to deal with.
Change to gray-level and compute a frequency histogram with bins [0...255] of pixel values that are present in the image
The bin with the highest frequency - the mode - corresponds to the color that is present the most. The frequency of each bin gives you the number of pixels of that color present in the frame.
Take this modal value as the single color to describe your frame - the color present in the largest amount in the frame.
The key point here is if the above steps are fast enough for realtime video. You'd have to try to find out I guess.
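If you do end up with OpenCV, a rough Python sketch of those steps might look like this (the spatial/color radii and the file name are placeholders to tune for your video):

import cv2
import numpy as np

frame = cv2.imread("frame.png")  # placeholder for one video frame

# 1. Mean shift filtering clusters nearly-similar colors together.
clustered = cv2.pyrMeanShiftFiltering(frame, 15, 30)  # spatial radius, color radius

# 2. Convert to gray and build a 256-bin histogram of pixel values.
gray = cv2.cvtColor(clustered, cv2.COLOR_BGR2GRAY)
hist = np.bincount(gray.ravel(), minlength=256)

# 3. The bin with the highest count is the value present the most in the frame.
dominant_value = int(np.argmax(hist))
pixel_count = int(hist[dominant_value])

Whether this is fast enough per frame is exactly the realtime question above; the mean shift step is the expensive one.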
Worst case scenario, you could loop over all the pixels in the image and do a count. I'm not sure what you are using programming-wise, but I use Python with NumPy and something similar to this, where pb is a GTK pixbuf with my image in it.
def pull_color_out(self, pb, *args):
    counter = 0
    dat = pb.get_pixels_array().copy()
    for y in range(0, pb.get_height()):
        for x in range(0, pb.get_width()):
            p = dat[y][x]
            # counts pure red pixels (RGB components are 0-indexed)
            if p[0] == 255 and p[1] == 0 and p[2] == 0:
                counter += 1
    return counter
Other than that, I would normally use a histogram and get the data I need from that. Mind you, this will not be your fastest option, especially for a video, but if you have time or just a few frames then hack away :P
I would like to create a color generator based on random numbers, which might differ only slightly, but I need the colors to be easily recognizable from each other. I was thinking of generating them in RGB format, which would probably be easiest. I'm afraid simply multiplying the given arguments wouldn't do very well. What algorithm do you suggest using? Also, the next generated color should not be the same as the previous one, but I don't want to store them - nor would mixing in the (micro)time do well, since parts of the script usually run faster than that.
If you wanted truly random colors, then generating the same color 10 times in a row would be acceptable. To get values that are perceived as random, you have to strip out true randomness.
The easiest way to do this is probably with a cycling index into a list of colors. Say you pick the web-safe colors, a list of 216 colors. Each time you want a new color, add a random number to the index, wrapping as needed. To prevent getting the same color twice in a row, keep the random increment at least 1 and less than the number of colors.
colorIndex = ( colorIndex + ( random() % 100 ) + 1 ) % 216;
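Spelled out a bit more, in Python just for concreteness (the palette construction and the step range are one possible choice, not the only one):

import random

# The 216 web-safe colors: every combination of six levels per channel.
levels = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]
palette = [(r, g, b) for r in levels for g in levels for b in levels]

color_index = 0

def next_color():
    global color_index
    # Step forward by 1..100 places, so we never land on the same entry twice in a row.
    color_index = (color_index + random.randint(1, 100)) % len(palette)
    return palette[color_index]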
If you do not want a lookup table, then generate HSB colors but limit the hue to part of the circle that does not include the previous color. If the previous hue was 60 degrees, then pick the next hue above 90 or below 30 degrees, for example. You probably want to limit the saturation and brightness to be above 50% or so.
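A possible Python sketch of that hue-exclusion idea (the 30-degree zone and the 50% floors are the example numbers from above):

import colorsys
import random

def next_color(previous_hue_deg):
    # Pick a new hue at least 30 degrees away from the previous one by
    # choosing from the rest of the circle and wrapping around.
    hue = (previous_hue_deg + random.uniform(30, 330)) % 360
    saturation = random.uniform(0.5, 1.0)  # keep saturation above 50%
    brightness = random.uniform(0.5, 1.0)  # keep brightness above 50%
    r, g, b = colorsys.hsv_to_rgb(hue / 360.0, saturation, brightness)
    return hue, (int(r * 255), int(g * 255), int(b * 255))

# Each call feeds the previous hue back in, so consecutive colors stay apart.
hue, rgb = next_color(60)
hue, rgb = next_color(hue)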
There are 256*256*256 possible color combinations if you generate a random number for each value of RGB.
I wouldn't be afraid of color collisions, but if you want to make sure that there are no collisions whatsoever, you will need to record the previous color.
This simple pseudocode illustrates how to avoid some unnecessary comparisons:
if red is not equal to previous_red then
    if blue is not equal to previous_blue then
        if green is not equal to previous_green then
            use this color
else
    generate again
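For what it's worth, the whole guard can also be collapsed into a single comparison of the complete triple: keep generating until the new color differs from the previous one. A quick Python sketch:

import random

def random_color(previous=None):
    # Re-roll only in the (roughly 1 in 16.7 million) case of an exact repeat.
    while True:
        color = (random.randint(0, 255), random.randint(0, 255), random.randint(0, 255))
        if color != previous:
            return color

previous = None
for _ in range(5):
    previous = random_color(previous)
    print(previous)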
Not an answer, but just to share a nice picture from xkcd:
It's not easy to model what constitutes "easily recognizable colors". The Euclidean distance between the R,G,B components of two colors is a rough measure, but the human eye is not an RGB color receptor! E.g. if one pair of colors has some Euclidean distance between them, and another pair of colors has exactly the same distance, you don't really know whether each pair is equally distinguishable unless you actually see them!
For a true random number generator, have a look here. I'm sure you can bound it within a range of numbers too.
Let me suggest this:
Use a pseudo-random number algorithm (Google will turn up thousands) and create an array with the colors.
You didn't specify the language, but anyway you can have something like:
colors = [0xFF0000, 0x00FF00, 0x0000FF]
Red, Green and Blue
And you can have something like:
position = fn_random() % length(colors); // keep the index within the array bounds
draw(colors[position]);
Hope it's what you are looking for...
Let me know!!