I tried searching, but due to the nature of my question, I was unable to find something satisfactory.
My problem is the following: I am trying to map numbers ranging from 0 to 2000 (though ideally the upper limit would be adjustable) to the much smaller interval from 10 to 100. The limits should map to each other (2000 -> 100 and 0 -> 10), and beyond that the mapping should preserve order: an entry that is bigger than another entry in [0; 2000] should also map to a bigger value in [10; 100].
I'm thinking that this question is not language specific, but in case you are wondering, I'm working with Javascript today.
To map
[A, B] --> [a, b]
use this formula
(val - A)*(b-a)/(B-A) + a
As correctly mentioned in the other answer, this is a linear mapping.
Basically
y = m*x + c
c = y-intercept (the intersection with the y-axis)
m = slope determined by the two known points (A, a) and (B, b): m = (b-a)/(B-A)
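For concreteness, here is a minimal sketch of that formula in Python, using the question's numbers (the function name is mine, not from any answer):

def map_range(val, A, B, a, b):
    # Linearly map val from [A, B] to [a, b]: (val - A)*(b - a)/(B - A) + a
    return (val - A) * (b - a) / (B - A) + a

print(map_range(0,    0, 2000, 10, 100))   # 10.0  -- lower limits map to each other
print(map_range(2000, 0, 2000, 10, 100))   # 100.0 -- upper limits map to each other
print(map_range(1000, 0, 2000, 10, 100))   # 55.0  -- order is preserved in between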
I think that instead of giving you a formula of direct mapping, a better approach would be to explain the idea behind it:
Suppose we want to map the interval [0,1] to the interval [1,3]. This can be seen as the problem of finding f(x) = Ax + B such that, for any x in [0,1], f(x) lies in [1,3], with the endpoints mapping to each other.
From this perspective, we already know some values:
x = 0 & f(0) = 1 => f(0) = A*0 + B = 1 => B = 1        (1)
x = 1 & f(1) = 3 => f(1) = A*1 + B = 3 <=> A + 1 = 3 => A = 2        (2)
From (1) and (2), we may conclude that the function that maps interval [0,1] to [1,3] is f(x) = 2x + 1.
In your case, you now have all the necessary knowledge to map the interval [0,2000] to [10,100].
// Given a value from intervalA, returns a mapped value from intervalB.
function intervalicValueMap(intervalA, intervalB, valueIntervalA) {
    var valueIntervalB = (valueIntervalA - intervalA[0]) * (intervalB[1] - intervalB[0])
        / (intervalA[1] - intervalA[0]) + intervalB[0];
    valueIntervalB = Math.round(valueIntervalB); // Omit rounding if not needed.
    return valueIntervalB;
}
var intervalA = [100, 200];
var intervalB = [1, 10];
var valueIntervalA = 170;
var valueIntervalB = intervalicValueMap(intervalA, intervalB, valueIntervalA);
console.log(valueIntervalB); // Logs 7
A simple linear mapping would map x to x*90/2000 + 10, so that 0 maps to 10 and 2000 maps to 100.
Here is one possible way to map your x data.
This sketch (written as JavaScript) shows the main idea for a map function that:
Avoids problems with x values outside the [b1, b2] range (they are clamped).
Handles array targets: s1 and s2 are arrays of lower/upper bounds, and the result is an array.
function map(x, b1, b2, s1, s2)
{
    var result = [];
    for (var i = 0; i < s2.length; i++) {
        if (x < b1)
            result[i] = s1[i];      // clamp below the source range
        else if (x > b2)
            result[i] = s2[i];      // clamp above the source range
        else
            result[i] = (x - b1) / (b2 - b1) * (s2[i] - s1[i]) + s1[i];
    }
    return result;
}
An answer using NumPy in Python:
import numpy as np

# [A, B]: old interval, [a, b]: new interval
A, B = 0, 2000
a, b = 10, 100
old_value = 1000

new_value = np.interp(old_value, [A, B], [a, b])
print(new_value)  # 55.0
Related
So I'm trying to make sense of a scenario in my class exercise, which is to find the max and min value of a function. I have two vectors, w and v, of weights, each of which sums to 1. The vectors are w = [0.6, 0.2, 0.2]^T and v = [0.8, -0.2, 0.4]^T.
These vectors form a linear combination of weights M = Aw + Bv, and A and B must sum to 1.
The function we are then optimizing is r = [0.1, 0.2, 0.1] • M
The constraints are as follows: 0 ≤ (0.6A + 0.8B) ≤ 1, 0 ≤ (0.2A - 0.2B) ≤ 1, 0 ≤ (0.2A + 0.4B) ≤ 1
The answers we should get are A = B = 0.5 for the minimum value of r, which is 0.1. For the maximum we should get A = 2, B = -1, with r = 0.16. But the values I'm getting for the max are A = 3.5714286, B = -1.4285714, and for the min I'm getting A = B = 0.
Below is the code.
import pulp as p
from pulp import *
problem = LpProblem('Car Factory', LpMaximize)
A = LpVariable('Amount of w', cat=LpContinuous)
B = LpVariable('Amount of v', cat=LpContinuous)
#Objective Function
problem += (0.1)*(0.6*A + 0.8*B) + (0.2)*(0.2*A - 0.2*B) + (0.1)*(0.2*A + 0.4*B) , 'Objective Function'
#Constraints
problem += (0.6*A + 0.8*B) <= 1 , 'A'
problem += (0.6*A + 0.8*B) >= 0 , 'AL'
problem += (0.2*A - 0.2*B) <= 1, 'B'
problem += (0.2*A - 0.2*B) >= 0, 'BL'
problem += (0.2*A + 0.4*B) <= 1, 'C'
problem += (0.2*A + 0.4*B) >= 0, 'CL'
problem.solve()
print("Amount of w: ", A.varValue)
print("Amount of v: ", B.varValue)
print("total: ", value(problem.objective))
I'm sure it has to do with the set up which I'm just not seeing. And also is there a more efficient way to put this together?
I think you are missing a constraint, which would explain your deviation from the expected result. Where is your constraint that:
A + B == 1
Also, you are importing pulp twice, which may cause some confusion in the namespace of your code. Do one or the other, not both.
On expressing the problem more efficiently...? Nahh. You could treat your two column vectors as arrays of length 3 and do the math in your objective a bit differently, but it probably isn't worth it and your variables are just scalars, so I'd write it as you did. Now if the vectors were much larger, or if the variables were vectors, sure, I'd do something else.
pulp doesn't naturally handle vectors (like numpy arrays) to my knowledge. If you are going to be doing a lot of optimization in vector-matrix format and you are comfortable with the linear algebra, you might look at cvxpy which handles them naturally. If you're in a class that uses pulp, it's just fine to learn the basics.
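For reference, a minimal sketch of the question's model with that A + B == 1 constraint added (same variable names as the question; only the equality constraint and the single import are new):

from pulp import LpProblem, LpVariable, LpMaximize, LpContinuous, value

problem = LpProblem('Car_Factory', LpMaximize)
A = LpVariable('Amount_of_w', cat=LpContinuous)
B = LpVariable('Amount_of_v', cat=LpContinuous)

# objective: r = [0.1, 0.2, 0.1] . (A*w + B*v)
problem += 0.1*(0.6*A + 0.8*B) + 0.2*(0.2*A - 0.2*B) + 0.1*(0.2*A + 0.4*B)

# the original interval constraints on the components of M
problem += 0.6*A + 0.8*B <= 1
problem += 0.6*A + 0.8*B >= 0
problem += 0.2*A - 0.2*B <= 1
problem += 0.2*A - 0.2*B >= 0
problem += 0.2*A + 0.4*B <= 1
problem += 0.2*A + 0.4*B >= 0

# the missing constraint: the combination weights must sum to 1
problem += A + B == 1

problem.solve()
print(A.varValue, B.varValue, value(problem.objective))  # expect A=2, B=-1, r=0.16 for the maximum

Switching LpMaximize to LpMinimize should then give the minimum, A = B = 0.5 with r = 0.1.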
I have 2 functions:
ccexpan - which calculates the coefficients of the interpolating polynomial of a function f with N nodes, in the basis of Chebyshev polynomials of the first kind.
csum - which calculates the values for arguments t using the coefficients c from ccexpan (via the Clenshaw algorithm).
This is what I have written so far:
function c = ccexpan(f,N)
    z = zeros(1,N+1);
    s = zeros(1,N+1);
    for i = 1:(N+1)
        z(i) = pi*(i-1)/N;
    end
    t = f(cos(z));
    for k = 1:(N+1)
        s(k) = sum(t.*cos(z.*(k-1)));
        s(k) = s(k)-(f(1)+f(-1)*cos(pi*(k-1)))/2;
    end
    c = s.*2/N;
and:
function y = csum(t,c)
    M = length(t);
    N = length(c);
    y = t;
    b = zeros(1,N+2);
    for k = 1:M
        for i = N:-1:1
            b(i) = c(i)+2*t(k)*b(i+1)-b(i+2);
        end
        y(k)=(b(1)-b(3))/2;
    end
Unfortunately these programs are very slow, and also slightly inaccurate. Please give me some tips on how to speed them up and how to improve accuracy.
Where possible try to get away from looping structures. At first blush, I would trade out your first for loop of
for i = 1:(N+1)
z(i) = pi*(i-1)/N;
end
and replace with
i=1:(N+1)
z = pi*(i-1)/N
I did not check the rest of your code, but the above example will definitely speed up your code. A second strategy is to combine loops when possible.
Martin,
Consider the following strategy.
% create hypothetical N and f
N = 3
f = @(x) 1./(1+15*x.*x)
% calculate z and t
i=1:(N+1)
z = pi*(i-1)/N
t = f(cos(z))
% make a column vector of k's
k = (1:(N+1))'
% do this: s(k) = sum(t.*cos(z.*(k-1)))
s1 = t.*cos(z.*(k-1)) % should be a matrix with one row for each row of k
% via implicit expansion
s2 = sum(s1,2) % row sum, i.e., one value for each row of k
% do this: s(k) = s(k)-(f(1)+f(-1)*cos(pi*(k-1)))/2
s3 = s2 - (f(1)+f(-1)*cos(pi*(k-1)))/2
% calculate c
c = s3 .* 2/N
Suppose I have a function phi(x1,x2) = k1*x1 + k2*x2 which I have evaluated over a grid. The grid is a square with boundaries at -100 and 100 along both the x1 and x2 axes, with some step size, say h = 0.1. Now I want to calculate this sum over the grid, which is what I'm struggling with:
What I was trying :
clear all
close all
clc
D=1; h=0.1;
D1 = -100;
D2 = 100;
X = D1 : h : D2;
Y = D1 : h : D2;
[x1, x2] = meshgrid(X, Y);
k1=2;k2=2;
phi = k1.*x1 + k2.*x2;
figure(1)
surf(X,Y,phi)
m1=-500:500;
m2=-500:500;
[M1,M2,X1,X2]=ndgrid(m1,m2,X,Y)
sys=@(m1,m2,X,Y) (k1*h*m1+k2*h*m2).*exp((-([X Y]-h*[m1 m2]).^2)./(h^2*D))
sum1=sum(sys(M1,M2,X1,X2))
MATLAB gives an error in ndgrid; any idea how I should code this?
MATLAB shows:
Error using repmat
Requested 10001x1001x2001x2001 (298649.5GB) array exceeds maximum array size preference. Creation of arrays greater
than this limit may take a long time and cause MATLAB to become unresponsive. See array size limit or preference
panel for more information.
Error in ndgrid (line 72)
varargout{i} = repmat(x,s);
Error in new_try1 (line 16)
[M1,M2,X1,X2]=ndgrid(m1,m2,X,Y)
Judging by your comments and your code, it appears as though you don't fully understand what the equation is asking you to compute.
To obtain the value M(x1,x2) at some given (x1,x2), you have to compute that sum over Z2. Of course, using a numerical toolbox such as MATLAB, you could only ever hope to compute over some finite range of Z2. In this case, since (x1,x2) covers the range [-100,100] x [-100,100], and h=0.1, it follows that mh covers the range [-1000, 1000] x [-1000, 1000]. Example: m = (-1000, -1000) gives you mh = (-100, -100), which is the bottom-left corner of your domain. So really, phi(mh) is just phi(x1,x2) evaluated on all of your discretised points.
As an aside, since you need to compute |x-hm|^2, you can treat x = x1 + i x2 as a complex number to make use of MATLAB's abs function. If you were strictly working with vectors, you would have to use norm, which is OK too, but a bit more verbose. Thus, for some given x=(x10, x20), you would compute x-hm over the entire discretised plane as (x10 - x1) + i (x20 - x2).
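If it helps to see that aside in isolation, here is a tiny Python/NumPy check (my own illustration, not part of the MATLAB solution below): the squared distance computed from the vector difference equals the squared magnitude of the corresponding complex number.

import numpy as np

x  = np.array([3.0, 4.0])     # some point (x1, x2)
hm = np.array([0.0, 0.0])     # some grid point h*m

d2_vector  = np.sum((x - hm)**2)                        # |x - hm|^2 via the vector difference
d2_complex = abs(complex(x[0]-hm[0], x[1]-hm[1]))**2    # the same value via a complex number

print(d2_vector, d2_complex)  # both print 25.0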
Finally, you can compute 1 term of M at a time:
D=1; h=0.1;
D1 = -100;
D2 = 100;
X = (D1 : h : D2); % X is in rows (dim 2)
Y = (D1 : h : D2)'; % Y is in columns (dim 1)
k1=2;k2=2;
phi = k1*X + k2*Y;
M = zeros(length(Y), length(X));
for j = 1:length(X)
for i = 1:length(Y)
% treat (x - hm) as a complex number
x_hm = (X(j)-X) + 1i*(Y(i)-Y); % this computes x-hm for all m
M(i,j) = 1/(pi*D) * sum(sum(phi .* exp(-abs(x_hm).^2/(h^2*D)), 1), 2);
end
end
By the way, this computation takes quite a long time. You can consider either increasing h, reducing D1 and D2, or changing all three of them.
Edited to clarify the application by adding units (ml) and explaining the difficulty of measuring wet reagents in units of 1/26 ml. The word 'solution' was ambiguous because it was used to mean both a chemical solution and the solution to the problem.
Added results based on Edward's reply
The real world application is that I am trying to determine the closest "convenient" volumes to use when mixing reagents A and B to create a solution (in the wet chemistry sense) that best approximates a specific A:B ratio. Let's define "convenient" as divisible by 5.
Example
Given:
1. X = A/(A+B) * C
2. Y = B/(A+B) * C
3. X + Y = C
4. A, B, C always positive integer
// e.g. a 500ml solution (wet chemistry sense) C with a 1:25 ratio of A and B
A = 1
B = 25
C = 500
This gives the volumes to use of X and Y to create the solution (wet chemistry sense) with the proper A:B ratio.
X = 500/26 = ~19.23ml
Y = 12500/26 = ~480.77ml
C = 13000/26 = 500ml
These are the exact volumes that create a total volume of 500 ml, but trying to measure reagent volumes in units of 1/26 ml is a challenge.
How can I find "convenient" values (integers divisible by 5) for X, Y, and C that best approximate the exact values of X, Y, and C, which would otherwise be multiples of 1/26? In this case I found the following as the closest "convenient" values for X, Y, and C:
X = 20ml
Y = 500ml
C = 520ml
C in this case (520 ml) is more than the required volume of 500 ml, but it is more practical to physically measure volumes of 20 ml and 500 ml than to measure reagent volumes in 1/26ths. The extra 20 ml is discarded, the cost of using nice values.
RESULTS BASED ON EDWARD'S ANSWER
A=1 B=25 C=500
X=20 Y=500 C2=520
A=1 B=20 C=500
X=25 Y=500 C2=525
A=1 B=100 C=500
X=5 Y=500 C2=505
A=1 B=75 C=500
X=10 Y=750 C2=760
A=1 B=50 C=900
X=20 Y=1000 C2=1020
One way to approach this would be to adjust C so that it absorbs the factor A+B. Then the ratio of A to B would be exact, and X, Y, and C would all be integers. Let D = 5*(A+B), C2 = ceiling(C/((double)D)) * D (round up so you get enough C), X = C2/(A+B)*A, Y = C2/(A+B)*B. If you want the closest value of C, use C2 = round(C/((double)D))*D instead.
If you're mixing chemicals, you probably want to round up rather than round to closest so you'll have enough with a little waste left over, which is better than not having enough.
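A small Python sketch of that calculation (the function name and structure are mine; it just follows the formulas above):

import math

def convenient_volumes(A, B, C, round_up=True):
    # Round C up (or to nearest) to a multiple of D = 5*(A+B), then split it exactly in the A:B ratio.
    D = 5 * (A + B)
    C2 = (math.ceil(C / D) if round_up else round(C / D)) * D
    X = C2 // (A + B) * A
    Y = C2 // (A + B) * B
    return X, Y, C2

print(convenient_volumes(1, 25, 500))   # (20, 500, 520), matching the example above
print(convenient_volumes(1, 50, 900))   # (20, 1000, 1020), matching the results list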
You can phrase this as an optimization problem with an L1 (absolute-value) objective function. (This is using a cannon to swat a mosquito, but I did it because I wanted to figure out how the L1 optimization works.) I used the glpsol program from the GLPK package (open source). Here is my program:
param A, integer, >= 0;
param B, integer, >= 0;
param C, integer, >= 0;
var x, integer, >= 0;
var y, integer, >= 0;
var e1x, >= 0;
var e1y, >= 0;
minimize e1 : e1x + e1y;
subject to
c1 : (5*x - (C*A)/(A + B)) <= e1x;
c2 : ((C*A)/(A + B) - 5*x) <= e1x;
c3 : (5*y - (C*B)/(A + B)) <= e1y;
c4 : ((C*B)/(A + B) - 5*y) <= e1y;
solve;
printf "x=%g, y=%g, error=%g\n", x, y, e1;
data;
param A := 1;
param B := 25;
param C := 500;
Here is the output:
$ glpsol --model find_nice_integers.mod
[... snip ...]
x=4, y=96, error=1.53846
Here are some notes about how to handle absolute values in optimization problems.
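If GLPK isn't handy, roughly the same L1 model can be sketched with PuLP (used earlier in this thread); this is my own translation of the model above, not part of the original answer:

from pulp import LpProblem, LpVariable, LpMinimize, LpInteger, value

A, B, C = 1, 25, 500
target_x = C * A / (A + B)          # exact X = 500/26
target_y = C * B / (A + B)          # exact Y = 12500/26

prob = LpProblem('find_nice_integers', LpMinimize)
x = LpVariable('x', lowBound=0, cat=LpInteger)    # final X = 5*x
y = LpVariable('y', lowBound=0, cat=LpInteger)    # final Y = 5*y
e1x = LpVariable('e1x', lowBound=0)
e1y = LpVariable('e1y', lowBound=0)

prob += e1x + e1y                    # minimize the total L1 error
prob += 5*x - target_x <= e1x        # these two rows encode |5x - X_exact| <= e1x
prob += target_x - 5*x <= e1x
prob += 5*y - target_y <= e1y        # these two rows encode |5y - Y_exact| <= e1y
prob += target_y - 5*y <= e1y

prob.solve()
print(x.varValue, y.varValue, value(prob.objective))   # expect x=4, y=96, error ~1.54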
So, you are given an integer number C and the ratio p:q between two other integer numbers A and B (i.e., A/B = p/q).
I will interpret your definition of convenient as requiring that X and Y are both multiples of 5, where
X = A / (A+B) * C'
Y = B / (A+B) * C'
C' is close to C
Replacing A/B with p/q we get
X = p / (p+q) * C'
Y = q / (p+q) * C'
Now, in order for X and Y to be integers, both p * C' and q * C' must be multiples of (p+q). And since we can assume that p:q is irreducible (i.e., p and q have no common factors), this means that C' must be divisible by p+q. In addition, C'/(p+q) must be a multiple of 5. So, C' must be a multiple of 5*(p+q).
The multiple of 5*(p+q) that is closest to C is:
C' := round(C/(5*(p+q)))*5*(p+q)
Now we can calculate:
X := p/(p+q)*C'
Y := q/(p+q)*C'
and they are indeed multiples of 5 because C'/(p+q) is.
Let's see how this behaves with your example:
Inputs:
p = 1
q = 25
C = 500
Then
C' := round(500/(5*(1+25)))*5*(1+25) = round(100/26)*5*26 = 4*5*26 = 520
Hence
X := p/(p+q)*C' = 1/(1+25)*4*5*26 = 1/26*4*5*26 = 4*5 = 20
Y := q/(p+q)*C' = 25/(1+25)*4*5*26 = 25/26*4*5*26 = 25*4*5 = 500.
Voila!
Let's first calculate the optimal (floating-point) A and B.
It can be observed that the optimal integer solutions are either {floor(A), ceiling(B)} or {ceiling(A), floor(B)}, so we simply try both and choose the answer with the smaller error.
The following is text from Data Structures and Algorithm Analysis by Mark Allen Weiss.
In the text below, x(i+1) should be read as x subscript i+1, and x(i) should be read as x subscript i.
x(i + 1) = (a*x(i))mod m.
It is also common to return a random real number in the open interval
(0, 1) (0 and 1 are not possible values); this can be done by
dividing by m. From this, a random number in any closed interval [a,
b] can be computed by normalizing.
The problem with this routine is that the multiplication could
overflow; although this is not an error, it affects the result and
thus the pseudo-randomness. Schrage gave a procedure in which all of
the calculations can be done on a 32-bit machine without overflow. We
compute the quotient and remainder of m/a and define these as q and
r, respectively.
In our case for M=2,147,483,647 A =48,271, q = 127,773, r = 2,836, and r < q.
We have
x(i+1) = (a*x(i)) mod m                          (Eq. 1)
       = a*x(i) - m*floor(a*x(i)/m)              (Eq. 2)

The author also mentions:

x(i) = q*floor(x(i)/q) + (x(i) mod q)            (Eq. 3)
My questions:
What does the author mean by "a random number is computed by normalizing"?
How did the author get Eq. 2 from Eq. 1?
How did the author get Eq. 3?
Normalizing means that if you have X ∈ [0,1] and you need Y ∈ [a, b], you can compute
Y = a + X * (b - a)
EDIT:
2. Let's suppose
a = 3, x = 5, m = 9
Then we have a*x = 15, and 15 mod 9 = 6, where [ax/m] means the integer part of ax/m (here [15/9] = 1).
So we have 15 = [ax/m]*m + 6.
We need to get 6: 15 - [ax/m]*m = 6 => ax - [ax/m]*m = 6 => x(i+1) = a*x(i) - [a*x(i)/m]*m
If you have a random number in the range [0,1], you can get a number in the range [2,5] (for example) by multiplying by 3 and adding 2.
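To tie both parts together, here is a small Python sketch (my own illustration): one LCG step computed with Schrage's decomposition, followed by normalization of the state into an arbitrary interval [lo, hi]. Python integers don't actually overflow; the sketch just mirrors the 32-bit trick the book describes. Note that the quoted q = 127,773 and r = 2,836 are the values that go with the multiplier a = 16,807, so that multiplier is used here.

M = 2**31 - 1    # m = 2,147,483,647
A = 16807        # multiplier whose q and r are 127,773 and 2,836
Q = M // A       # 127773, the quotient of m / a
R = M % A        # 2836,   the remainder of m / a

def next_state(x):
    # One step x(i+1) = (A*x(i)) mod M, using Schrage's decomposition to avoid overflow:
    # A*x mod M = A*(x mod Q) - R*(x div Q), plus M if the result is negative.
    t = A * (x % Q) - R * (x // Q)
    return t if t > 0 else t + M

def normalized(x, lo, hi):
    # Normalize a state in [1, M-1] to a real number in [lo, hi]: lo + (x/M)*(hi - lo).
    return lo + (x / M) * (hi - lo)

x = 1
for _ in range(3):
    x = next_state(x)
    print(x, normalized(x, 2.0, 5.0))   # the second value always lies in [2, 5]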