This is my first question on the forum and my algebra is rusty, so please be indulgent ^^'
My problem: I want to predict a collision between two objects in uniform circular motion, for each of which I know the angular velocity (in radians), the distance from the origin (the radius of its circular path), and the Cartesian coordinates of the center of its circle.
I can get the Cartesian position of each object at a given time t (timestamp) using:
Oa.x = ra * cos(wa * t)
Oa.y = ra * sin(wa * t)
Oa.x: x coordinate of object A
ra: radius of circle A (the circular path)
wa: angular speed of object A (in radians)
t: time (timestamp)
The same goes for object B (Ob).
I want to find t such that ||Ca - Cb|| = rOa + rOb
Ca, Cb: positions of objects A and B (i.e. Oa and Ob above)
rOa, rOb: radii of objects A and B themselves (not of their circular paths)
Squaring both sides and expanding gives me this:
||Ca - Cb||^2 = (rOa + rOb)^2
(ra * cos(wa * t) - rb * cos(wb * t))^2 + (ra * sin(wa * t) - rb * sin(wb * t))^2 = (rOa + rOb)^2
From that I should get a quadratic polynomial that I can solve for t. But how can I find a condition that tells me whether such a t exists? And, if possible, how do I solve it for t?
Your motion equations are missing some terms. I would expect this instead:
a0(t) = omg0*t + ang0
x0(t) = cx0 + R0 * cos(a0(t))
y0(t) = cy0 + R0 * sin(a0(t))
a1(t) = omg1*t + ang1
x1(t) = cx1 + R1 * cos(a1(t))
y1(t) = cy1 + R1 * sin(a1(t))
where t is the time in [sec], cx?, cy? is the center of rotation, ang? is the starting angle (at t = 0) in [rad], and omg? is the angular speed in [rad/sec]. If the objects themselves have radii r?, then a collision occurs when the distance between them is <= r0 + r1,
so you want to find the smallest time where:
(x1-x0)^2 + (y1-y0)^2 <= (r0+r1)^2
This will most likely lead to a transcendental equation, so you need a numeric approach to solve it. For problems like this I usually use approximation search, so to solve this do the following:
1. Loop t from 0 to some reasonable time limit.
The collision will happen with a constant frequency, and the time between collisions is tied to the periods of both motions, so I would test up to a time limit of lcm(2*PI/omg0, 2*PI/omg1), where lcm is the least common multiple.
Do not loop t through all possible times by brute force; use a heuristic (like the approximation search linked above). Beware that the initial time step must be reasonable: I would try dt = min(0.2*PI/omg0, 0.2*PI/omg1) so you have at least 10 sample points along each circle.
2. Solve for the t at which the distance between the objects is minimal.
This, however, will find the time when the objects overlap the most, i.e. when their centers are closest. So you need to subtract some constant time (or search for it again) to get to the start of the collision. For this you can even use binary search, as the distance is monotonic there.
The next collision will appear after lcm(2*PI/omg0, 2*PI/omg1),
so if you found the first collision time tc0, then
tc(i) = tc0 + i*lcm(2*PI/omg0,2*PI/omg1)
i = 0,1,2,3,...
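To make this concrete, here is a small Python sketch of the numeric search described above. The function and variable names are my own; dt must be small enough that the coarse scan does not skip a brief overlap, and t_max and dt would be chosen as suggested above.

import math

def first_collision(cx0, cy0, R0, omg0, ang0, r0,
                    cx1, cy1, R1, omg1, ang1, r1,
                    t_max, dt):
    # Squared distance between the two object centers at time t.
    def dist2(t):
        x0 = cx0 + R0 * math.cos(omg0 * t + ang0)
        y0 = cy0 + R0 * math.sin(omg0 * t + ang0)
        x1 = cx1 + R1 * math.cos(omg1 * t + ang1)
        y1 = cy1 + R1 * math.sin(omg1 * t + ang1)
        return (x1 - x0) ** 2 + (y1 - y0) ** 2

    hit2 = (r0 + r1) ** 2
    # Coarse scan: find the first time step at which the objects touch.
    t_prev, t = 0.0, dt
    while t <= t_max:
        if dist2(t) <= hit2:
            # Refine the start of the collision by bisection on [t_prev, t];
            # the distance is monotonic on this small interval.
            lo, hi = t_prev, t
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                if dist2(mid) <= hit2:
                    hi = mid
                else:
                    lo = mid
            return hi
        t_prev, t = t, t + dt
    return None  # no collision found within [0, t_max]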
I'm working with SVR and using this resource. Everything is super clear with the epsilon-insensitive loss function (from the figure). The prediction comes with a tube, to cover most training samples and to generalize the bounds, using support vectors.
Then we have this explanation: "This can be described by introducing (non-negative) slack variables, to measure the deviation of training samples outside the epsilon-insensitive zone." I understand this error outside the tube, but I don't know how we can use it in the optimization. Could somebody explain this?
Locally, I'm trying to implement a very simple optimization solution without libraries. This is what I have for the loss function:
import numpy as np

# Kernel func, linear by default
def hypothesis(x, weight, k=None):
    k = k if k else lambda z: z
    k_x = np.vectorize(k)(x)
    return np.dot(k_x, np.transpose(weight))

.......

import math

def boundary_loss(x, y, weight, epsilon):
    prediction = hypothesis(x, weight)
    scatter = np.absolute(
        np.transpose(y) - prediction)
    bound = lambda z: z \
        if z >= epsilon else 0
    return np.sum(np.vectorize(bound)(scatter))
First, let's look at the objective function. The first term, 1/2 * w^2 (I wish this site had LaTeX support, but this will suffice), correlates with the margin of the SVM. The article you linked doesn't, in my opinion, explain this very well and says the term describes "the model's complexity", but perhaps this is not the best way of explaining it. Minimizing this term maximizes the margin (while still representing the data well), which is the predominant goal of using SVMs for regression.
Warning, Math Heavy Explanation: The reason this is the case is that when maximizing the margin, you want to find the "farthest" non-outlier point right on the margin and minimize its distance. Let this farthest point be x_n. We want to find its Euclidean distance d from the plane f(w, x) = 0, which I will rewrite as w^T * x + b = 0 (where w^T is just the transpose of the weights matrix so that we can multiply the two).

To find the distance, let us first normalize the plane such that |w^T * x_n + b| = epsilon, which we can do WLOG as w is still able to form all possible planes of the form w^T * x + b = 0. Then note that w is perpendicular to the plane. This is obvious if you have dealt a lot with planes (particularly in vector calculus), but it can be proven by choosing two points on the plane, x_1 and x_2, and noticing that w^T * x_1 + b = 0 and w^T * x_2 + b = 0. Subtracting the two equations we get w^T * (x_1 - x_2) = 0. Since x_1 - x_2 is just any vector lying in the plane, and its dot product with w is 0, we know that w is perpendicular to the plane.

Finally, to actually calculate the distance between x_n and the plane, we take the vector formed by x_n and some point on the plane x' (that vector is x_n - x') and project it onto w. Doing this, we get d = |w * (x_n - x')| / |w|, which we can rewrite as d = (1 / |w|) * |w^T * x_n - w^T * x'|, and then add and subtract b inside to get d = (1 / |w|) * |w^T * x_n + b - w^T * x' - b|. Notice that w^T * x_n + b is epsilon (from our normalization above), and that w^T * x' + b is 0, as x' is just a point on our plane. Thus, d = epsilon / |w|.

Maximizing this distance subject to the constraint of finding x_n and having |w^T * x_n + b| = epsilon is a difficult optimization problem. What we can do is restructure it as minimizing 1/2 * w^T * w subject to the first two constraints in the picture you attached, that is, |y_i - f(x_i, w)| <= epsilon. You may think that I have forgotten the slack variables, and that is true; while focusing on this term and ignoring the second term, we ignore the slack variables for now, and I will bring them back later. The reason these two optimizations are equivalent is not obvious, but the underlying reason lies in discrimination boundaries, which you are free to read more about (it's a lot more math that frankly I don't think this answer needs). Then note that minimizing 1/2 * w^T * w is the same as minimizing 1/2 * |w|^2, which is the desired result we were hoping for. End of the Heavy Math
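For reference, the distance chain above can be written compactly in LaTeX notation (same symbols as in the text: x_n is the farthest point, x' any point on the plane):

d = \frac{\lvert w^{T}(x_n - x')\rvert}{\lVert w\rVert}
  = \frac{\lvert (w^{T}x_n + b) - (w^{T}x' + b)\rvert}{\lVert w\rVert}
  = \frac{\lvert \pm\varepsilon - 0\rvert}{\lVert w\rVert}
  = \frac{\varepsilon}{\lVert w\rVert}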
Now, notice that we want to make the margin big, but not so big that it includes noisy outliers like the one in the picture you provided.
Thus, we introduce a second term. To keep the margin at a reasonable size, the slack variables are introduced (I will call them p and p* because I don't want to type out "psi" every time). These slack variables ignore everything inside the margin, i.e. the points that do not harm the objective and that are "correct" in terms of their regression status. However, the points outside the margin are outliers; they do not reflect well on the regression, so we penalize them simply for existing. The slack error function given there is relatively easy to understand: it just adds up the slack error of every point (p_i + p*_i) for i = 1, ..., N, and then multiplies by a modulating constant C which determines the relative importance of the two terms. A low value of C means that we are okay with having outliers, so the margin will be thinned and more outliers will be produced. A high value of C indicates that we care a lot about not having slack, so the margin will be made bigger to accommodate these outliers at the expense of representing the overall data less well.
A few things to note about p and p*. First, note that they are both always >= 0. The constraint in your picture shows this, but it also intuitively makes sense as slack should always add to the error, so it is positive. Second, notice that if p > 0, then p* = 0 and vice versa as an outlier can only be on one side of the margin. Last, all points inside the margin will have p and p* be 0, since they are fine where they are and thus do not contribute to the loss.
Notice that with the introduction of the slack variables, if you have any outliers then you won't want the condition from the first term, that is, |w^T * x_n + b| = epsilon, as x_n would be this outlier and your whole model would be screwed up. What we allow, then, is to change the constraint to |w^T * x_n + b| = epsilon + (p + p*). When translated into the new optimization's constraint, we get the full constraint from the picture you attached, that is, |y_i - f(x_i, w)| <= epsilon + p + p*. (I combined the two equations into one here, but you could write them out as in the picture and that would be the same thing.)
Hopefully after covering all this up, the motivation for the objective function and the corresponding slack variables makes sense to you.
If I understand the question correctly, you also want code to calculate this objective/loss function, which I think isn't too bad. I have not tested this (yet), but I think this should be what you want.
# Function for calculating the error/loss for a SVM. I assume that:
# - 'x' is 2d array representing the vectors of the data points
# - 'y' is an array representing the values each vector actually gives
# - 'weights' is an array of weights that we tune for the regression
# - 'epsilon' is a scalar representing the breadth of our margin.
def optimization_objective(x, y, weights, epsilon):
    # Calculates first term of objective (note that norm^2 = dot product)
    margin_term = np.dot(weights, weights) / 2
    # Now calculate second term of objective. First get the sum of slacks.
    slack_sum = 0
    for i in range(len(x)):  # For each observation
        # First find the absolute distance between expected and observed.
        diff = abs(hypothesis(x[i], weights) - y[i])
        # Now subtract epsilon
        diff -= epsilon
        # If diff is still more than 0, then it is an 'outlier' and will have slack.
        slack = max(0, diff)
        # Add it to the slack sum
        slack_sum += slack
    # Now we have the slack_sum, so then multiply by C (I picked this as 1 arbitrarily)
    C = 1
    slack_term = C * slack_sum
    # Now, simply return the sum of the two terms, and we are done.
    return margin_term + slack_term
I got this function working on my computer with small data, and you may have to change it a little to work with your data if, for example, the arrays are structured differently, but the idea is there. Also, I am not the most proficient with python, so this may not be the most efficient implementation, but my intent was to make it understandable.
Now, note that this just calculates the error/loss (whatever you want to call it). To actually minimize it requires going into Lagrangians and intense quadratic programming which is a much more daunting task. There are libraries available for doing this but if you want to do this library free as you are doing with this, I wish you good luck because doing that is not a walk in the park.
Finally, I would like to note that I got most of this information from notes I took in my ML class last year, and the professor (Dr. Abu-Mostafa) was a great help in learning the material. The lectures for this class are online (by the same prof), and the pertinent ones for this topic are here and here (although, in my very biased opinion, you should watch all the lectures; they were a great help). Leave a comment/question if you need anything cleared up or if you think I made a mistake somewhere. If you still don't understand, I can try to edit my answer to make more sense. Hope this helps!
Currently I need a function which gives a weighted random number.
It should choose a random number between two doubles/integers (for example 4 and 8), while the value in the middle (6) should occur, on average, about twice as often as the limit values 4 and 8.
If this were only about integers, I could predefine the values with variables and custom probabilities, but I need the function to return a double with at least 2 decimal places (meaning thousands of different numbers)!
The environment I use is "Game Maker", which provides all sorts of basic random generators, but not weighted ones.
Could anyone possibly point me in the right direction on how to achieve this?
Thanks in advance!
The sum of two independent continuous uniform(0,1)'s, U1 and U2, has a continuous symmetrical triangle distribution between 0 and 2. The distribution has its peak at 1 and tapers to zero at either end. We can easily translate that to a range of (4,8) via scaling by 2 and adding 4, i.e., 4 + 2*(U1 + U2).
However, you don't want a height of zero at the endpoints, you want half the peak's height. In other words, you want a triangle sitting on a rectangular base (i.e., uniform), with height h at the endpoints and height 2h in the middle. That makes life easy, because the triangle must have a peak of height h above the rectangular base, and a triangle with height h has half the area of a rectangle with the same base and height h. It follows that 2/3 of your probability is in the base, 1/3 is in the triangle.
Combining the elements above leads to the following pseudocode algorithm. If rnd() is a function call that returns continuous uniform(0,1) random numbers:
define makeValue()
    if rnd() <= 2/3 # Caution, may want to use 2.0/3.0 for many languages
        return 4 + (4 * rnd())
    else
        return 4 + (2 * (rnd() + rnd()))
I cranked out a million values using that and plotted a histogram:
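If anyone wants to reproduce that check, the following is a quick Python translation of the pseudocode above (rnd() replaced by random.random(); the crude text histogram is only for illustration):

import random
from collections import Counter

def make_value(lo=4.0, hi=8.0):
    # 2/3 of the probability mass goes to the uniform "base", 1/3 to the triangle.
    if random.random() <= 2.0 / 3.0:
        return lo + (hi - lo) * random.random()
    else:
        return lo + (hi - lo) / 2.0 * (random.random() + random.random())

# Crude histogram over 0.5-wide bins: the counts should rise roughly linearly
# toward the middle of the range and fall off symmetrically toward 4 and 8.
samples = [make_value() for _ in range(1_000_000)]
bins = Counter(int((s - 4.0) / 0.5) for s in samples)
for b in sorted(bins):
    print(4.0 + b * 0.5, bins[b])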
In case someone needs this in Game Maker (or a different language) as a universal function:
if random(1) <= argument0
    return argument1 + ((argument2-argument1) * random(1))
else
    return argument1 + (((argument2-argument1)/2) * (random(1) + random(1)))
Called as follows (similar to the standard random_range function):
val = weight_random_range(FACTOR, FROM, TO)
"FACTOR" determines how much of the whole probability figure is the "base" for constant probability. E.g. 2/3 for the figure above.
0 will provide a perfect triangle and 1 a rectangle (no weightning).
This problem is based on a puzzle by Joel Spolsky from 2001.
A guy "gets a job as a street painter, painting the dotted lines down the middle of the road." On the first day he finishes up 300 yards, on the second - 150, and on the 3rd even less so. The boss is furious and demands an explanation.
"I can't help it," says the guy. "Every day I get farther and farther away from the paint can!"
My question is: can you estimate the distance he covered on the 3rd day?
One of the comments in the linked thread does derive a precise solution, but my question is about a good enough estimation -- say, 10% -- that is easy to make from the general principles.
Clarification: this is about a certain method in the analysis of algorithms, not about developing an algorithm, nor code.
There are a lot of unknowns here - his walking speed, his painting speed, for how long does the paint in the brush last...
But clearly there are two processes going on here. One is quadratic - it's the walking to and fro between the paint can and the painting point. The other is linear - it's the process of painting, itself.
Thinking about the 10th or even the 100th day, it is clear that the linear component becomes negligible, and the process becomes very nearly quadratic - the walking takes almost all the time. During the first few minutes of the first day, on the contrary, it is close to being linear.
We can thus say that the time t as a function of the distance s follows a power law t ~ s^a with a changing coefficient a = 1.0 ... 2.0. This also means that s ~ t^b, b = 1/a.
Applying the empirical orders of growth analysis:
The b coefficient between day 1 and day 2 is approximated as
b(1,2) = log (450/300) / log 2 = 0.585 ;; and so,
a(1,2) = 1/b(1,2) = 1/0.585 = 1.71
Just as expected, the a coefficient is below 2. Going for the time period between day 2 and day 3, we can set it approximately to the middle value between 1.71 and 2.0,
a(2,3) = 1.85 ;; a = 1.0 .... 2.0
b(2,3) = 0.54 ;; b = 1.0 .... 0.5
s(3) = s(2) * (3/2)^b(2,3)
= 450 * (3/2)^0.54
= 560 yards
Thus the distance covered in the third day can be estimated as 560 - 450 = 110 yards.
What if the a coefficient had the maximum possible value, 2.0, already (which is impossible)? Then, 450*(3/2)^0.5 = 551 yards. And for the other extreme, if it were the same 1.71 (which it clearly can't be, either), 450*(3/2)^0.585 = 570.
This means that the estimate of 110 yards is plausible, with an error of less than 10 yards on either side.
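For completeness, here is the same estimate as a few lines of Python (just re-doing the arithmetic above; no new assumptions):

import math

s1, s2 = 300.0, 450.0                        # cumulative yards after day 1 and day 2

b_12 = math.log(s2 / s1) / math.log(2 / 1)   # ~0.585
a_12 = 1.0 / b_12                            # ~1.71

# Assume the exponent keeps drifting toward the purely quadratic regime (a = 2.0)
# and take the midpoint for the day-2-to-day-3 interval.
a_23 = (a_12 + 2.0) / 2.0                    # ~1.85
b_23 = 1.0 / a_23                            # ~0.54

s3 = s2 * (3.0 / 2.0) ** b_23                # ~560 cumulative yards
print(round(s3), round(s3 - s2))             # ~560 total, ~110 yards on day 3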
Consider four assumptions:
painting speed = infinity
walking speed = x
he can paint only an infinitesimally small stretch with one brush stroke
he leaves his can at the starting point
The distance he walks to paint a stretch dy of road at distance y = 2y·dy
Total distance he walks = integral of 2y dy = y^2
Total time to paint a distance y = y^2/x
Time taken to paint 300 yards = 1 day
(300)^2/x = 1
x = 90000 yards/day
Total time he can paint distance y = y^2/90000
(y/300)^2 = 2 after second day
y = 300*2^(1/2) = 424
Day 1 = 300
Day 2 = 424-300 = 124
Day 3 = 300*3^(1/2)-424 = 520 - 424 = 96
Answer: 300 / 124 / 96, taking the first day as 300.
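The per-day figures under these assumptions can be checked with a short Python snippet (calibrated, as above, so that day 1 gives 300 yards):

import math

# Cumulative yards painted after T days: time grows as the square of the
# distance, so y(T) = 300 * sqrt(T) once calibrated to y(1) = 300.
def painted(T):
    return 300.0 * math.sqrt(T)

day1 = painted(1)                  # 300
day2 = painted(2) - painted(1)     # ~124
day3 = painted(3) - painted(2)     # ~96
print(round(day1), round(day2), round(day3))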
I have applied different clustering algorithms like k-means, k-medoids, k-means (fast) and expectation maximization clustering on my biomedical dataset using RapidMiner. Now I want to check the performance of these algorithms, i.e. which algorithm gives better clustering results.
For that I have applied some operators like 'cluster density performance' and 'cluster distance performance', which give me the average within-cluster distance for each cluster and the Davies-Bouldin index. But I am confused: is this the right way to check the clustering performance of each algorithm, with these operators?
I am also interested in the Silhouette method, which I could apply for each algorithm to check performance, but I can't understand where to get the b(i) and a(i) values from the clustering algorithm output.
The most reliable way of evaluating clusterings is by looking at your data: are the clusters of any use to you, and do they make sense to a domain expert?
Never rely on numbers alone.
For example, you can evaluate clusterings numerically by taking the within-cluster variance.
However, k-means optimizes exactly this value, so k-means will always come out best; in fact, this measure decreases as k increases - but the results do not become any more meaningful!
It is somewhat okay to use one coefficient such as Silhouette coefficient to compare results of the same algorithm this way. Since Silhouette coefficient is somewhat orthogonal to the variance minimization, it will make k-means stop at some point, when the result is a reasonable balance of the two objectives.
However, applying such a measure to different algorithms - which may have a different amount of correlation to the measure - is inherently unfair. Most likely, you will be overestimating one algorithm and underestimating the performance of another.
Another popular way is external evaluation, with labeled data. While this should be unbiased against the method -- unless the labels were generated by a similar method -- it has different issues: it will punish a solution that actually discovers new clusters!
All in all, evaluation of unsupervised methods is hard. Really hard. The best you can do is see if the results prove useful in practice!
It's very good advice never to rely on the numbers.
All the numbers can do is help you focus on particular clusterings that are mathematically interesting. The Davies-Bouldin validity measure is nice because it will show a minimum when it thinks the clusters are most compact with respect to themselves and most separated with respect to others. If you plot a graph of the Davies-Bouldin measure as a function of k, the "best" clustering will show up as a minimum. Of course the data may not form into spherical clusters so this measure may not be appropriate but that's another story.
The Silhouette measure tends to a maximum when it identifies a clustering that is relatively better than another.
The cluster density and cluster distance measures often exhibit an "elbow" as they tend to zero. This elbow often coincides with an interesting clustering (I have to be honest and say I'm never really convinced by this elbow criterion approach).
If you were to plot different validity measures as a function of k and all of the measures gave an indication that a particular k was better then others that would be good reason to consider that value in more detail to see if you, as the domain expert for the data, agree.
If you're interested I have some examples here.
There are many ways to evaluate the performance of clustering models in machine learning. They are broadly divided into 3 categories-
1. Supervised techniques
2. Unsupervised techniques
3. Hybrid techniques
Supervised techniques are evaluated by comparing the value of evaluation metrics against pre-defined ground truth (labels) and reference values.
For example: Jaccard similarity index, Rand index, purity, etc.
Unsupervised techniques comprise evaluation metrics that cannot be compared with pre-defined values, but they can be compared among different clustering models, and thereby we can choose the best model.
For example: Silhouette measure, SSE.
Hybrid techniques are simply a combination of supervised and unsupervised methods.
Now, let’s have a look at the intuition behind these methods-
Silhouette measure
Silhouette measure is derived from 2 primary measures- Cohesion and Separation.
Cohesion is nothing but the compactness or tightness of the data points within a cluster.
There are basically 2 ways to compute the cohesion-
· Graph based cohesion
· Prototype based cohesion
Let's consider a cluster A with 4 data points, labelled 1 to 4.
Graph-based cohesion computes the cohesion value by adding the distances (Euclidean or Manhattan) from each point to every other point.
Here,
Graph Cohesion(A) = Constant * ( Dis(1,2) + Dis(1,3) + Dis(1,4) + Dis(2,3) + Dis(2,4) + Dis(3,4) )
Where,
Constant = 1/ (2 * Average of all distances)
Prototype-based cohesion is calculated by adding the distances of all data points from a commonly accepted point, such as the centroid.
Here, let's take C as the centroid of cluster A.
Then,
Prototype Cohesion(A) = Constant * (Dis(1,C) +Dis(2,C) + Dis(3,C) + Dis(4,C))
Where,
Constant = 1/ (2 * Average of all distances)
Separation is the distance or magnitude of difference between the data points of 2 different clusters.
Here also, we have primarily 2 kinds of methods of computation of separation value.
1. Graph based separation
2. Prototype based separation
Graph-based separation calculates the value by adding the distances between every point in Cluster 1 and every point in Cluster 2.
For example, if A and B are 2 clusters with 4 data points each, then
Graph based separation = Constant * ( Dis(A1,B1) + Dis(A1,B2) + Dis(A1,B3) + Dis(A1,B4) + Dis(A2,B1) + Dis(A2,B2) + Dis(A2,B3) + Dis(A2,B4) + Dis(A3,B1) + Dis(A3,B2) + Dis(A3,B3) + Dis(A3,B4) + Dis(A4,B1) + Dis(A4,B2) + Dis(A4,B3) + Dis(A4,B4) )
Where,
Constant = 1/ number of clusters
Prototype-based separation is calculated by finding the distance between the commonly accepted points (such as the centroids) of the 2 clusters.
Here, we can simply calculate the distance between the centroids of the 2 clusters A and B, i.e. Dis(C(A), C(B)), multiplied by a constant, where constant = 1/number of clusters.
Silhouette measure = (b-a)/max(b,a)
Where,
a = cohesion value
b = separation value
If the Silhouette measure is close to -1, the clustering is very poor.
If the Silhouette measure is around 0, the clusters overlap and the assignment is ambiguous, so improvements are still possible.
If the Silhouette measure is close to 1, the clustering is very good.
When we have multiple clustering algorithms, it is always recommended to choose the one with high Silhouette measure.
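Since the original question asked where a(i) and b(i) come from, here is a small from-scratch Python sketch of the Silhouette coefficient. X is an (n, d) array of points and labels is the per-point cluster assignment (at least two clusters assumed); the names are mine, not RapidMiner operators.

import numpy as np

def silhouette_score(X, labels):
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = len(X)
    # Pairwise Euclidean distance matrix.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = np.zeros(n)
    for i in range(n):
        own = (labels == labels[i])
        own[i] = False
        if not own.any():
            continue                    # singleton cluster: score stays 0
        a = dist[i, own].mean()         # a(i): mean distance to own cluster
        b = min(dist[i, labels == c].mean()
                for c in np.unique(labels) if c != labels[i])
        # b(i): mean distance to the nearest other cluster
        scores[i] = (b - a) / max(a, b)
    return scores.mean()

The mean silhouette of the k-means, k-medoids and EM partitions can then be compared directly, keeping in mind the caveats from the other answers.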
SSE ( Sum of squared errors)
SSE is calculated by adding Cohesion and Separation values.
SSE = Value ( Cohesion) + Value ( Separation).
When we have multiple clustering algorithms, it is always recommended to choose the one with low SSE.
Jaccard similarity index
The Jaccard similarity index is measured using the class labels of the data points. If labels are not provided, we cannot compute this index.
Each pair of data points falls into one of 4 categories:
True Negative (TN) = pair with different class and different cluster
True Positive (TP) = pair with same class and same cluster
False Negative (FN) = pair with same class and different cluster
False Positive (FP) = pair with different class and same cluster
Here,
Note - nC2 means number of combinations with 2 elements possible from a set containing n elements
nC2 = n*(n-1)/2
TP = 5C2 + 4C2 + 2C2 + 3C2 = 20
FN = 5C1 * 1C1 + 5C1 * 2C1 + 1C1 * 4C1 + 1C1 * 2C1 + 1C1 * 3C1 = 24
FP = 5C1 * 1C1 + 4C1 * 1C1 +4C1 * 1C1 + 1C1 * 1C1 +3C1 * 2C1 = 20
TN= 5C1 * 4C1 + 5C1 * 1C1 + 5C1 * 3C1 + 1C1 * 1C1 + 1C1 * 1C1 + 1C1 * 2C1 + 1C1 * 3C1 + 4C1 * 3C1 + 4C1 * 2C1 + 1C1 * 3C1 + 1C1 * 2C1 = 72
Jaccard similarity index = TP / (TP + FP + FN)
Here, Jaccard similarity index = 20 / (20 + 20 + 24) = 0.31
Rand Index
Rand Index is similar to Jaccard similarity index. Its formula is given by-
Rand Index = (TP + TN) / (TP + TN + FP +FN)
Here, Rand Index = (20 + 72) / (20+ 72 + 20 + 24) = 0.67
When the Rand index is above 0.7, it can be considered a good clustering.
Similarly, when the Jaccard similarity index is above 0.5, it can be considered a good clustering.
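A minimal sketch of the pair counting behind these indices, assuming labels holds the ground-truth classes and clusters the produced assignment (one entry per data point):

from itertools import combinations

def pair_counts(labels, clusters):
    TP = FP = FN = TN = 0
    for i, j in combinations(range(len(labels)), 2):
        same_class = labels[i] == labels[j]
        same_cluster = clusters[i] == clusters[j]
        if same_class and same_cluster:
            TP += 1          # same class, same cluster
        elif same_class:
            FN += 1          # same class, different cluster
        elif same_cluster:
            FP += 1          # different class, same cluster
        else:
            TN += 1          # different class, different cluster
    return TP, FP, FN, TN

def rand_index(labels, clusters):
    TP, FP, FN, TN = pair_counts(labels, clusters)
    return (TP + TN) / (TP + FP + FN + TN)

def jaccard_index(labels, clusters):
    TP, FP, FN, _ = pair_counts(labels, clusters)
    return TP / (TP + FP + FN)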
Purity
This metric also requires labels in the data. The formula is given by:
Purity = (number of data points belonging to the majority label in Cluster 1 + number of data points belonging to the majority label in Cluster 2 + ... + number of data points belonging to the majority label in Cluster n) / total number of data points.
For example, let's consider 3 clusters - A, B and C - with labelled data points.
Purity = (a + b + c) / n
Where,
a = Number of black circles in cluster A (Since black is the maximum in count)
b = Number of red circles in cluster B (Since red is the maximum in count)
c = Number of green circles in cluster C (Since green is the maximum in count)
n = Total number of data points
Here, Purity = (5 + 6 + 3) / (8 + 9 + 5) = 0.6
If purity is greater than 0.7, it can be considered a good clustering.
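And a short sketch of purity under the same assumptions (clusters and labels are per-point lists; the names are illustrative):

from collections import Counter

def purity(clusters, labels):
    total = len(labels)
    majority_sum = 0
    for c in set(clusters):
        members = [labels[i] for i in range(total) if clusters[i] == c]
        # Count of the most frequent (majority) label within this cluster.
        majority_sum += Counter(members).most_common(1)[0][1]
    return majority_sum / total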
Original source - https://qr.ae/pNsxIX