I have a problem with impyramid in MATLAB. I am trying to save a one-level downscaled version of a binary image and also a two-level downscaled version of it. That is simple to do in MATLAB, as the following code shows:
scale1_2= impyramid(compressed_image, 'reduce');
scale1_4= impyramid(scale1_2, 'reduce');
So an image of size 810x1080 is saved at 405x540 and 203x270 pixels. The problem I am facing is when I try to expand these two images back to the original dimensions.
scaled_result1_2=impyramid(scale1_2,'expand');
scaled_result1_4=impyramid(impyramid(scale1_4,'expand'), 'expand');
It is expected that scaled_result1_2 and scaled_result1_4 are 810x1080 images again, but they are not:
>>size(scaled_result1_2)
809 1079
>>size(scaled_result1_4)
809 1077
I need these two images to be 810x1080 pixels again, but impyramid cannot do this. If I resize these images with imresize, will it still perform the image pyramid reconstruction of upscaling and blurring? Which interpolation method should I use to get a similar result?
If you actually open up impyramid and look at the source code, it boils down to an imresize call. Specifically, this is what happens when you call impyramid with the 'expand' direction, where A is the input image:
M = size(A,1);
N = size(A,2);
scaleFactor = 2;
outputSize = 2*[M N] - 1;
kernel = makePiecewiseConstantFunction( ...
    [1.25 0.75 0.25 -0.25 -0.75 -1.25 -Inf], ...
    [0.0 0.125 0.5 0.75 0.5 0.125 0.0]);
kernelWidth = 3;
B = imresize(A, scaleFactor, {kernel, kernelWidth}, ...
    'OutputSize', outputSize, 'Antialiasing', false);
As you can see, outputSize is defined as twice the image dimensions minus 1, which is why you are off by one pixel per dimension. The function makePiecewiseConstantFunction is a local function defined inside impyramid; I'll let you open it up and see that for yourself. Make sure it is defined before calling the above code.
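For reference, here is a minimal sketch of what such a helper could look like. It is inferred purely from how it is called above and is a guess, not the shipped implementation, so copy the real local function out of impyramid.m if you want an exact match:
function fun = makePiecewiseConstantFunction(breakPoints, values)
%// Sketch (assumed behaviour): returns a handle to a piecewise constant
%// function with f(x) = values(k) for the first k with x >= breakPoints(k).
%// breakPoints is assumed monotonically decreasing, ending in -Inf.
fun = @evaluate;
    function y = evaluate(x)
        y = zeros(size(x));
        for i = 1:numel(x)
            idx = find(x(i) >= breakPoints, 1, 'first');
            y(i) = values(idx);
        end
    end
end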
Therefore, simply remove the subtraction of 1 to achieve what you want.
As such, call the above code, but change outputSize to:
outputSize = 2*[M N];
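Putting it together, here is a minimal standalone sketch (assuming a makePiecewiseConstantFunction like the one sketched above is on your path) that expands scale1_2 back to exactly 810 x 1080:
%// Sketch: impyramid's expand step without the "- 1" on the output size
A = scale1_2;          %// the 405 x 540 image from the question
M = size(A,1);
N = size(A,2);
kernel = makePiecewiseConstantFunction( ...
    [1.25 0.75 0.25 -0.25 -0.75 -1.25 -Inf], ...
    [0.0 0.125 0.5 0.75 0.5 0.125 0.0]);
scaled_result1_2 = imresize(A, 2, {kernel, 3}, ...
    'OutputSize', 2*[M N], 'Antialiasing', false);
size(scaled_result1_2) %// 810 1080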
However, if you want to be adventurous, you can modify the source code yourself to take a flag: when it is true, the subtraction of 1 is skipped, and when it is false, the subtraction is performed as before. You can modify the header of impyramid like this:
function B = impyramid(A, direction, padding)
Then, at the beginning before any computation is done, you can do this:
if nargin == 2
    padding = false;
end
This allows you to call impyramid without a third argument, which will default to no padding.
Once you're done, in the expand section of the if statement, you can do:
else
    scaleFactor = 2;
    outputSize = 2*[M N];
    if ~padding %// Change
        outputSize = outputSize - 1;
    end
    kernel = makePiecewiseConstantFunction( ...
        [1.25 0.75 0.25 -0.25 -0.75 -1.25 -Inf], ...
        [0.0 0.125 0.5 0.75 0.5 0.125 0.0]);
    kernelWidth = 3;
end
The nested if statement checks whether you want the output image to be of size 2M x 2N or (2M - 1) x (2N - 1). Once you're done modifying the code, you can call:
scaled_result1_2 = impyramid(scale1_2, 'expand', true);
scaled_result1_4 = impyramid(impyramid(scale1_4,'expand', true), 'expand', true);
Could you tell me how to process a range of intensities in an image? I want to do a gamma correction of "cameraman.tif": for pixels in the range 0:120, gamma = 1.8; for pixels in the range 120:255, gamma = 0.5.
But all pixels go into the first if statement, so I am unable to apply the second gamma.
a = imread('cameraman.tif');
gamma1 = 2;
gamma2 = 0.5;
N =size(a);
out = zeros(N);
for i=1:N
for j=1:N
temp=a(i,j);
if temp>0&temp<120
out(i,j)=temp.^2;
end
if temp>120&temp<=255
out(kx,ky)=temp.^0.5;
end
end
end
imshow(out)
Your second if statement indexes with the variables kx and ky; I'm assuming you wanted to use i and j:
out(i,j)=temp.^0.5;
You also must make sure the intensities are double precision: with uint8 values, the squaring saturates at 255 and the square root is rounded to an integer. Therefore, cast each intensity to double as it is read, then convert back to uint8 when you're done; do that conversion after you have swept through the whole image:
for i=1:N
    for j=1:N
        temp=double(a(i,j)); % Change
        if temp>0&temp<120
            out(i,j)=temp.^2;
        end
        if temp>120&temp<=255
            out(i,j)=temp.^0.5; % Change
        end
    end
end
out = uint8(out); % Change
kx and ky were set somewhere else in your code and never change, so if and when the second if statement does fire, the gamma-corrected value is only ever written to the single location (kx, ky). My advice is to write an actual function so that you aren't cross-contaminating variables between workspaces. Wrapping this in a function would have immediately given you an error telling you that kx and ky are not defined.
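For instance, here is a minimal sketch of such a function (gammaCorrect is just a name I made up):
function out = gammaCorrect(a)
% Hypothetical wrapper: i, j and temp are local to this function, so a
% stale workspace variable like kx or ky would raise an "Undefined
% function or variable" error instead of silently writing to one spot.
out = zeros(size(a));
for i = 1:size(a,1)
    for j = 1:size(a,2)
        temp = double(a(i,j));
        if temp > 0 && temp < 120
            out(i,j) = temp.^2;
        elseif temp >= 120 && temp <= 255
            out(i,j) = temp.^0.5;
        end
    end
end
out = uint8(out);
end
Call it as out = gammaCorrect(imread('cameraman.tif'));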
BTW, I would rather you do this more efficiently, without loops. You can easily perform the same operations vectorized. This requires converting the image to double, since the Cameraman image is uint8 by default. Therefore, convert the image with double, do the gamma correction, then convert back with uint8:
a = double(imread('cameraman.tif'));
out = zeros(size(a));
out(a > 0 & a < 120) = a(a > 0 & a < 120).^2;
out(a >= 120 & a <= 255) = a(a >= 120 & a <= 255).^0.5;
out = uint8(out);
The first and second lines should be familiar. The third line builds a logical mask of intensities strictly between 0 and 120; we use that same mask to index into the original image and access only those values, square each one, and write them to the same spatial locations in the output. The fourth line does the same for intensities between 120 and 255, taking the square root instead. We finally convert back to uint8 for display.
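As an aside, if you want a literal gamma correction with the exponents from the question (1.8 and 0.5), the usual convention is to normalize intensities to [0,1] before applying the power; here is a sketch under that assumption:
% Sketch: range-dependent gamma on normalized intensities
a = double(imread('cameraman.tif'));
an = a / 255;                 % normalize to [0,1]
out = zeros(size(an));
dark = a < 120;               % pixels in the 0:120 range
out(dark) = an(dark).^1.8;    % gamma = 1.8
out(~dark) = an(~dark).^0.5;  % gamma = 0.5 for the 120:255 range
out = uint8(255 * out);
imshow(out);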
I need to do the following:
I have a fixed environment with a point in it
At each time step the point moves and I need to take a screenshot of the current status (environment + point)
What I do is
function getPixels(state)
fig = figure('visible','off')
hold all
plot_environment() % calls patch and other stuff
plot(state(1),state(2),'r+')
f = getframe();
data = f.cdata;
close(fig)
The problem is that it is very slow (0.6 s, which for me is really too much).
I tried declaring fig as persistent and I can get down to 0.4 s, but that is still too much.
I read about using print or hardcopy, but neither helped. Even reducing the number of pixels with -r20 (1/5 of my default size) did not speed it up.
Any suggestions? Is there a faster way to get the pixels?
EDIT: ADDITIONAL DETAILS
The state is just a 2d point.
The environment is defined by some fixed, known variables used to draw shapes. More specifically, I have some points
points = [c11 c12
c21 c22
.....]
used to patch rectangles, circles and triangles. For this I use patch and circles.
So in the end I want to plot everything together and get the resulting pixels. Is there a way to do it without getframe or a way to speed it up?
COMPLETE EXAMPLE
It requires the circles function (available on the MATLAB File Exchange).
Launch it with tic; getPixels([0.1, 0.2]'); toc.
It takes 0.43 s on average; the getframe command alone takes 0.29 s.
function data = getPixels(state)
fig = figure('visible','off');
hold all
c1 = [0.1 0.75;
0.45 0.75];
c2 = [0.45 0.4;
0.45 0.8];
radius = 0.1;
grey = [0.4,0.4,0.4];
% Circles
p = [c1; c2];
circles(p(:,1), p(:,2), radius, 'color', grey, 'edgecolor', grey)
% Rectangles
patch([0.1 0.45 0.45 0.1], [0.65 0.65 0.85 0.85], grey, 'EdgeAlpha', 0)
patch([0.35 0.55 0.55 0.35], [0.4 0.4 0.8 0.8], grey, 'EdgeAlpha', 0)
% Triangle
x = [0.95, 1.0, 1.0];
y = [1.0, 0.95, 1.0];
fill(x, y, 'r')
axis([0 1 0 1])
box on
axis square
% Point
plot(state(1),state(2),'ro','MarkerSize',8,'MarkerFaceColor','r');
f = getframe();
data = f.cdata;
close(fig)
You can reduce the execution time of MATLAB's getframe function by a factor of ten. The trick is not to create a figure each time you call getPixels(), but to reuse an existing one: pass the figure handle in via the function parameters, and use MATLAB's clf function to clear the figure window between two calls.
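A minimal sketch of that idea (the two-argument signature is my own suggestion, not code from the question):
function data = getPixels(state, fig)
%// Sketch: reuse a figure the caller created once, instead of creating
%// and destroying a figure on every call.
set(0, 'CurrentFigure', fig);  %// make fig current without forcing it visible
clf(fig);                      %// clear the previous frame's contents
hold all
plot_environment()             %// same environment-drawing code as before
plot(state(1), state(2), 'r+')
f = getframe(fig);
data = f.cdata;
end
The caller then creates fig = figure('visible','off'); once, and calls data = getPixels(state, fig); at every time step.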
EDIT
Here is an example of the way I play with figure and getframe.
The following performance chart is given by:
%%
clear all
close all
clc
nbSim = 10; %number of getframe calls
tElapsed = zeros(nbSim, 2); %two types of getting frames
%% METHOD 1: figure within loop
for ind_sim = 1:nbSim
    fig = figure;
    %some graphical elements
    hold all
    patch(rand(1,4), rand(1,4), rand(1,3), 'EdgeAlpha', 0)
    patch(rand(1,4), rand(1,4), rand(1,3), 'EdgeAlpha', 0)
    fill(rand(1,3), rand(1,3), 'r')
    plot(rand,rand,'ro','MarkerSize',8,'MarkerFaceColor','k');
    %some axes properties
    axis([0 1 0 1])
    box on
    axis square
    tStart = tic;
    f = getframe();
    tElapsed(ind_sim,1) = toc(tStart);
    data = f.cdata;
    close(fig)
end
%% METHOD 2: figure outside loop
fig = figure;
for ind_sim = 1:nbSim
    %some graphical elements
    hold all
    patch(rand(1,4), rand(1,4), rand(1,3), 'EdgeAlpha', 0)
    patch(rand(1,4), rand(1,4), rand(1,3), 'EdgeAlpha', 0)
    fill(rand(1,3), rand(1,3), 'r')
    plot(rand,rand,'ro','MarkerSize',8,'MarkerFaceColor','k');
    %some axes properties
    axis([0 1 0 1])
    box on
    axis square
    tStart = tic;
    f = getframe();
    tElapsed(ind_sim,2) = toc(tStart);
    data = f.cdata;
    clf
end
close(fig)
%% plot results
plot(tElapsed);
set(gca, 'YLim', [0 max(tElapsed(:))+0.1])
xlabel('Number of calls');
ylabel('Execution time');
legend({'within (method 1)';'outside (method 2)'});
title('GetFrame execution time');
You have to drop the per-call creation of a figure, even if it is declared invisible; that creation is what hurts the execution time.
If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.
One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it: an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within any window in O(1) time. Computing the integral image itself takes time proportional to the number of pixels, but you only have to do it once. With the integral image, you examine an s x s neighbourhood around each pixel; if the pixel's intensity is more than t% below the average intensity of that s x s window, it is classified as background, and otherwise as foreground. This is adaptive because the thresholding uses local pixel neighbourhoods rather than a single global threshold.
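As a tiny illustration of the O(1) window-sum property (a self-contained sketch, separate from the full algorithm below):
%// Sketch: sum over a window of im in O(1) from an integral image
im = double(imread('cameraman.tif'));
intImage = cumsum(cumsum(im, 2), 1);
r1 = 50; r2 = 100; c1 = 60; c2 = 120;  %// window corners; assumes r1, c1 > 1
windowSum = intImage(r2, c2) - intImage(r1-1, c2) ...
          - intImage(r2, c1-1) + intImage(r1-1, c1-1);
isequal(windowSum, sum(sum(im(r1:r2, c1:c2))))  %// true: integer-valued sums are exact in double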
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s being the size of the thresholding window and t the percentage we talked about above and can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Too few or too many arguments? Check this before touching im
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
    im = double(rgb2gray(im));
elseif size(im, 3) == 1
    im = double(im);
else
    error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom white of the image, and there are some holes we need to fill in inside the paper. As such, let's use some morphology and declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes when we're done:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that:
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this:
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!
I am a crystallographer trying to analyse crystal orientations from up to 5000 files. Can MATLAB convert angle values in a table that looks like this:
into a table that looks like this?
Here's a more concrete example based on Lakesh's idea; however, this one will handle any amount of rotation. First, create a base circular image with a strip through the middle. Then simply run a for loop that rotates the base image by each angle in your rotation-values matrix and stacks the rotated images in a grid that mirrors the layout of the matrix.
The trick is to figure out how to define the base orientation image. As such, let's define a white square, with a black circle in the middle. We will also define a red strip in the middle. For now, let's assume that the base orientation image is 51 x 51. Therefore, we can do this:
%// Define a grid of points between -25 to 25 for both X and Y
[X,Y] = meshgrid(-25:25,-25:25);
%// Define radius
radius = 22;
%// Generate a black circle that has the above radius
base_image = (X.^2 + Y.^2) <= radius^2;
%// Make into a 3 channel colour image
base_image = ~base_image;
base_image = 255*cast(repmat(base_image, [1 1 3]), 'uint8');
%// Place a strip in the middle of the circle that's red
width_strip = 44;
height_strip = 10;
strip_locs = (X >= -width_strip/2 & X <= width_strip/2 & Y >= -height_strip/2 & Y <= height_strip/2);
base_image(strip_locs) = 255;
With the above, this is what I get:
Now all you need to do is create a final output image with as many tiles as there are rows and columns in your matrix. Given that your rotation values are stored in M, we can use imrotate from the Image Processing Toolbox with the 'crop' flag to ensure the output image is the same size as the original. However, any pixels that fall outside the image after rotation default to 0 with imrotate, and you want those to appear white in your example, so we have to do a bit of extra work. Create a logical matrix of the same size as the input image and rotate it the same way you rotated the base image; wherever this rotated mask is false (i.e. the pixel fell outside the image), set the corresponding output pixels to white. As such:
%// Get size of rotation value matrix
[rows,cols] = size(M);
%// For storing the output image; uint8 so that imshow/imwrite treat the
%// 0-255 values correctly
output_image = zeros(rows*51, cols*51, 3, 'uint8');
%// For each value in our rotation value matrix...
for row = 1 : rows
    for col = 1 : cols
        %// Rotate the image
        rotated_image = imrotate(base_image, M(row,col), 'crop');
        %// Take a completely white image and rotate this as well.
        %// Invert so we know which values were outside of the image
        Mrot = ~imrotate(true(size(base_image)), M(row,col), 'crop');
        %// Set these values outside of each rotated image to white
        rotated_image(Mrot) = 255;
        %// Store in the right slot.
        output_image((row-1)*51 + 1 : row*51, (col-1)*51 + 1 : col*51, :) = rotated_image;
    end
end
Let's try a few angles to be sure this is right:
M = [0 90 180; 35 45 60; 190 270 55];
With the above matrix, this is what I get for my image. This is stored in output_image:
If you want to save this image to file, simply do imwrite(output_image, 'output.png');, where output.png is the name of the file you want to save to your disk. I chose PNG because it's lossless and has a relatively low file size compared to other lossless standards (save JPEG 2000).
Edit to show no line when the angle is 0
If you wish to use the above code but only display a black circle when the angle is around 0, it's just a matter of inserting an if statement inside the for loop, as well as creating another image that contains a black circle with no strip through it. When the if condition is satisfied, you place this new image in the right grid location instead of the black circle with the red strip.
Therefore, using the above code as a baseline, do something like this:
%// Define matrix of sample angles
M = [0 90 180; 35 45 60; 190 270 55];
%// Define a grid of points between -25 to 25 for both X and Y
[X,Y] = meshgrid(-25:25,-25:25);
%// Define radius
radius = 22;
%// Generate a black circle that has the above radius
base_image = (X.^2 + Y.^2) <= radius^2;
%// Make into a 3 channel colour image
base_image = ~base_image;
base_image = 255*cast(repmat(base_image, [1 1 3]), 'uint8');
%// NEW - Create a black circle image without the red strip
black_circle = base_image;
%// Place a strip in the middle of the circle that's red
width_strip = 44;
height_strip = 10;
strip_locs = (X >= -width_strip/2 & X <= width_strip/2 & Y >= -height_strip/2 & Y <= height_strip/2);
base_image(strip_locs) = 255;
%// Get size of rotation value matrix
[rows,cols] = size(M);
%// For storing the output image; uint8 so that imshow/imwrite treat the
%// 0-255 values correctly
output_image = zeros(rows*51, cols*51, 3, 'uint8');
%// NEW - define tolerance
tol = 5;
%// For each value in our rotation value matrix...
for row = 1 : rows
    for col = 1 : cols
        %// NEW - If the angle is around 0, then draw a black circle only
        if M(row,col) >= -tol && M(row,col) <= tol
            rotated_image = black_circle;
        else %// This is the logic if the angle is not around 0
            %// Rotate the image
            rotated_image = imrotate(base_image, M(row,col), 'crop');
            %// Take a completely white image and rotate this as well.
            %// Invert so we know which values were outside of the image
            Mrot = ~imrotate(true(size(base_image)), M(row,col), 'crop');
            %// Set these values outside of each rotated image to white
            rotated_image(Mrot) = 255;
        end
        %// Store in the right slot.
        output_image((row-1)*51 + 1 : row*51, (col-1)*51 + 1 : col*51, :) = rotated_image;
    end
end
The variable tol in the above code defines a tolerance: any angle with -tol <= angle <= tol gets the plain black circle. This allows for floating-point error, because it is never a good idea to compare floating-point values for equality directly; accepted practice is to test for equality within some tolerance.
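For instance, a minimal sketch of that comparison pattern:
%// Sketch: tolerance-based comparison instead of angle == 0
angle = 360 * (0.1 + 0.2 - 0.3);  %// nominally 0, but about 2e-14 in floating point
tol = 5;
if abs(angle) <= tol              %// equivalent to -tol <= angle <= tol
    disp('Angle treated as zero: use the plain black circle');
end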
Using the above modified code with the matrix of angles M as seen in the previous example, I get this image now:
Notice that the top left entry of the matrix has an angle of 0, which is thus visualized as a black circle with no strip through it as we expect.
General idea to solve your problem:
1. Store two images: one for 0 and 180 degrees, and another for 90 and 270 degrees.
2. Read the data from the file
3. if angle == 0 || angle == 180
image = image1
else
image = image2
end
To handle any angle (a minimal sketch follows these steps):
1. Store one image, e.g. image = imread('yourfile.png')
2. angle = Read the data from the file
3. B = imrotate(image,angle)
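A minimal sketch of those three steps ('yourfile.png' is a placeholder for your base orientation image, and the angle is hard-coded where you would read it from a file):
base = imread('yourfile.png');     % placeholder base orientation image
angle = 35;                        % e.g. one value read from your data file
B = imrotate(base, angle, 'crop'); % 'crop' keeps the original image size
imshow(B);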
I'm trying to add a fade effect to my form by manually changing its opacity, but I'm having trouble calculating the correct value by which to increment the form's Opacity.
I know I could use the AnimateWindow API, but it shows some unexpected behavior, and I'd rather do it manually anyway to avoid any P/Invoke, so I can use it in Mono later on.
My application supports speeds ranging from 1 to 10. And I've manually calculated that for a speed of 1 (slowest) I should increment the opacity by 0.005 and for a speed of 10 (fastest) I should increment by 0.1. As for the speeds between 1 and 10, I used the following expression to calculate the correct value:
double opSpeed = (((0.1 - 0.005) * (10 - X)) / (1 - 10)) + 0.1; // X = [1, 10]
I thought this would give me a linear scale and that that would be OK. However, for X equal to 4 and above, it's already too fast, more than it should be. I mean, for speeds between 7 and 10 I barely see a difference, and the animation speeds for these values should be spaced out a little more.
Note that I still want the fastest increment to be 0.1 and the slowest 0.005. But I need all the others to be linear between them.
What am I doing wrong?
It actually makes sense why it behaves like this. Take a fixed interval between increments, say a few milliseconds: with the equation above, if X = 10 then opSpeed = 0.1, and if X = 5 then opSpeed is about 0.047. An increment of 0.1 reaches full opacity in 10 steps, while 0.047 takes only about twice as many. Over an interval of just a few milliseconds, that difference is not enough to tell the speeds from 5 to 10 apart.
I think what you want is:
0.005 + ((0.1-0.005)/9)*(X-1)
for X ranging from 1 to 10.
This gives a linear scale: 0.005 when X = 1 and 0.1 when X = 10.
After the comments below, I'm also including my answer fit for a geometric series instead of a linear scale.
0.005 * 20^((X-1)/9)
This results in a geometric progression: 0.005 when X = 1 and 0.1 when X = 10.
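For a quick endpoint check of both formulas, here is a small sketch (in MATLAB, since that is what most of this page uses):
X = 1:10;
linearSpeed    = 0.005 + ((0.1 - 0.005)/9) .* (X - 1);  % linear scale
geometricSpeed = 0.005 .* 20.^((X - 1)/9);              % geometric scale
[linearSpeed([1 end]); geometricSpeed([1 end])]         % both rows: 0.005 then 0.1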
After much more discussion, as seen in the comments below, the updates are as follows.
@Nazgulled found the following relation between my geometric series and the manual values he actually needed to ensure a smooth fade animation.
The relationship was as follows:
This means a geometric/exponential series is the way to go.
After hours of my trying to come up with an appropriate curve fit for the right-hand graph and derive a proper equation, @Nazgulled informed me that Wolfram|Alpha does that. Seriously amazing. :)
Wolfram Alpha link
He should have what he wants now, barring very high error from the equation above.
Your problem stems from the fact that the human eye's response is not linear; to be precise, the eye does not perceive the difference between luminosities of 0.05 and 0.10 as being the same as the difference between 0.80 and 0.85. The whole topic is complicated; you may want to search for the phrase "gamma correction" for more information. In general, you'll probably want to find an equation that effectively "gamma corrects" for human ocular response, and use that as your fading function.
It's not really an answer, but I'll just point out that everyone who has posted so far, including the original questioner, has posted the same equation. So with four independent derivations, maybe we should assume the equation is probably correct.
I did the algebra, but here's the code to verify (in Python, by the way, with offsets added to separate the curves):
from pylab import *
X = arange(1, 10, .1)
opSpeed0 = (((0.1 - 0.005) * (10 - X)) / (1 - 10)) + 0.1 # original
opSpeed1 = 0.005 + ((0.1-0.005)/9)*(X-1) # Suvesh
opSpeed2 = 0.005*((10-X)/9.) + 0.1*(X-1)/9. # duffymo
a = (0.1 - 0.005) / 9 #= 0.010555555555... # Roger
b = 0.005 - a #= -0.00555555555...
opSpeed3 = a*X+b
nonlinear01 = 0.005*2**((2*(-1 + X))/9.)*5**((-1 + X)/9.)
plot(X, opSpeed0)
plot(X, opSpeed1+.001)
plot(X, opSpeed2+.002)
plot(X, opSpeed3+.003)
plot(X, nonlinear01)
show()
Also, at Nazgulled's request, I've included the nonlinear curve suggested by Suvesh (which, by the way, looks quite a lot like a gamma-correction curve, as suggested by McWafflestix). Suvesh's nonlinear equation is in the code as nonlinear01.
Here's how I'd program that linear relationship. But first I'd like to make clear what I think you're doing.
You want the rate of change in opacity to be a linear function of speed:
o(v) = o1*N1(v) + o2*N2(v), where 0 <= v <= 1, o(0) = o1, and o(1) = o2.
If we choose N1(v) to equal 1-v and N2(v) = v we end up with what you want:
o(v) = o1*(1-v) + o2*v
So, plugging in your values:
v = (u-1)/(10-1) = (u-1)/9
o1 = 0.005 and o2 = 0.1
So the function should look like this:
o(u) = 0.005*{1 - (u-1)/9} + 0.1*(u-1)/9
o(u) = 0.005*{(9-u+1)/9} + 0.1*(u-1)/9
o(u) = 0.005*{(10-u)/9} + 0.1*(u-1)/9
Simplifying gives o(u) = (0.095u - 0.05)/9 for 1 <= u <= 10, which evaluates to 0.005 at u = 1 and 0.1 at u = 10. Should work fine.
If I understand what you're after, you want the equation of a line which passes through these two points in the plane: (1, 0.005) and (10, 0.1). The general equation for such a line (as long as it is not vertical) is y = ax+b. Plug the two points into this equation and solve the resulting set of two linear equations to get
a = (0.1 - 0.005) / 9 = 0.010555555555...
b = 0.005 - a = -0.00555555555...
Then, for each integer x = 1, 2, 3, ..., 10, plug x into y = ax+b to compute y, the value you want.