I was wondering if anyone knows how to assign marks in spatstat so that they tend to cluster spatially? I have a set of lat long coordinates that I want to categorize into 4 groups. I have figured out how to randomly assign marks/groups to these points using the following code:
as.ppp(data, window, marks = factor(sample(1:4, nrow(data), replace = TRUE)))
But I can't figure out how to assign the marks so that groups tend to occupy points closer to one another. As a further complication, I would also like the number of points within each group to be the same, specified number each time. Does anyone have any leads? Thanks in advance!
Typically in spatstat we define models which describe/generate points at random locations and possibly with random marks. If I understand you correctly, you have a fixed set of locations and you simply want to assign random marks. How many points do you have? If you don't have too many points, a simple suggestion could be to generate a multivariate normally distributed variable and then take the n_1 lowest values for the first mark, the n_2 next values for the second mark, and so on. A simple example with 4 equal-sized groups of points:
library(spatstat)
library(mvtnorm)
set.seed(42) # Make reproducible
X <- redwood # Example data
n <- npoints(redwood)
Xdist <- pairdist(X) # n x n matrix of distances in X
decay_rate <- 1 # Parameter for covariance structure
sigma <- exp(-decay_rate * Xdist)
m <- rmvnorm(1, rep(0, n), sigma)
breaks <- quantile(m, probs = c(0, .25, .5, .75, 1)) # breaks to cut marks in four equal sized groups
marks(X) <- cut(m, breaks = breaks, include.lowest=TRUE, labels = 1:4)
plot(X)
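If the group sizes must be exactly some pre-specified counts (rather than the roughly equal quantile groups above), you could instead assign marks by the rank of m, following the "n_1 lowest values, n_2 next values" idea; a small sketch, where n_per_group holds arbitrary example counts that must sum to npoints(X):
n_per_group <- c(16, 15, 15, 16)   # example counts; must sum to npoints(X) (62 for redwood)
marks(X) <- factor(rep(1:4, times = n_per_group)[rank(as.vector(m))])
table(marks(X))                    # verify the group sizes
plot(X)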
The basic definition of a random variable is that it is a function based on a random experiment. The question is: if it is a function, say f, then how can it take numerical values?
Suppose we toss two coins and let X be the random variable giving the number of heads, with possible values {0, 1, 2}. For the event of two heads, say w, we have X(w) = 2, which is the value of the function X at w, not of X itself.
But sometimes it is written that X is an r.v. taking values 0, 1, 2, ...
Doesn't it sound wrong to say that a function "takes values"?
A random variable is a well defined function X: E -> R, whose domain E is a probability space and its codomain is (generally speaking) the set of real numbers.
Intuitively, X is some kind of metric or measurement on the elements of E.
Example 1
Let E be the set of users of Stack Overflow at a given point in time, say right now. And let X be the function that assigns their reputation to every SO user. For example, you could calculate P(X >= 5000), which is the proportion of SO users with a reputation of 5000 or more.
Notice that the event X >= 5000 is nothing but a compact notation for the subset of E defined as:
{u in E | X(u) >= 5000}
meaning the subset of SO users u with a reputation of 5000 or more.
Example 2
Let E be the set of questions on SO and X the function that assigns the number of votes (at a certain point in time) to each question. If you pick one question q at random, X(q) would be its number of votes, and we could ask for the probability of, say, X < 0 (down-voted questions).
Here the subset of such questions is
{q in E | X(q) < 0}
i.e., the subset of questions q having a negative vote count.
Conclusion
There is nothing random in a random variable. The randomness is in the way we pick elements (or subsets) from its domain.
Speaking of functions: yes, it is safe to say that a function can take certain values. Speaking of random variables and probability, the definition I know is:
A random variable assigns a numerical value to each possible outcome of a random experiment.
This definition does indeed say that X (the random variable) is a function. In your case, saying that X (as a function) can take the values 0, 1, 2 basically means that the image of X (a subset of its codomain, or possibly the whole codomain or target set) is the set {0, 1, 2}, i.e. the integers in the interval [0, 2].
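To make the asker's two-coin example concrete in this notation: the sample space is E = {HH, HT, TH, TT} and X counts heads, so
X(HH) = 2, X(HT) = X(TH) = 1, X(TT) = 0
and, for instance,
P(X = 1) = P({w in E | X(w) = 1}) = P({HT, TH}) = 2/4 = 1/2.
X itself is an ordinary, deterministic function; "X takes the values 0, 1, 2" simply describes its image.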
I edited my question trying to make it as short and precise.
I am developing a prototype of a facial recognition system for my graduation project. I use Eigenfaces and my main source is the Turk and Pentland paper, available here: http://www.face-rec.org/algorithms/PCA/jcn.pdf.
My doubts focus on steps 4 and 5.
I cannot correctly interpret the number of thresholds: whether there are two types of thresholds, or only one (notice that the text speaks of two types but uses the same symbol). And again, my question is whether this threshold (or these thresholds) is unique and global for all people, or whether each person has their own value.
I understand the steps up to calculating a matrix O of class weight vectors. This matrix O has dimension M' x P, where M' is the number of eigenfaces chosen and P the number of people.
What follows is what confuses me. The paper speaks of two distances: the distance of one class to another, and also the distance of one face to another. I call these D1 and D2 respectively. NOTE: the training set contains M images in total, with F = M / P images per person.
I understand that the threshold(s) should be chosen empirically, but there must be a way to approximate them. I was initially designing a matrix of distances D1 of dimension P x P, where the row vector D1(i) holds the distances from the class-average vector O(i) to each O(j), j = 1..P, i.e. an "all vs. all" comparison.
Up to here I follow; what comes next depends on whether I should actually choose a single global threshold for everyone, or a separate value for each individual. I am also unsure whether there are two types of thresholds: one for distances between classes, and one for distances between faces.
I have a theory about how I could proceed, although it is not really supported by Turk's concepts:
Pre-test stage:
Generate two distance matrices D1 and D2:
D1 stores the distances between classes, and D2 the distances between faces, based on the matrices W and A respectively.
Then, since the training set contains P people, take the F column vectors of D1 for each person and estimate a threshold T1 in the range [Min, Max]. Thus I will have a T1(i), i = 1..P.
Separately, derive a T2 based on the range [Min, Max] over the whole matrix D2. This defines whether something is a face or not.
Test stage:
Build a test set of images, with one image for each known person:
Itest = {Itest(1) ... Itest(P)}
For every test image Itest(i):
Calculate the mean-subtracted (face space) image Atest = Itest - Imean
Calculate the weight vector Otest = UT * Atest
Calculate the distances:
dist1(j) = distance(Otest, O (j)), j = 1..P
Af = project(Otest, U)
dist2 = distance(Atest, Af)
Evaluate recognition:
MinDist = Min(dist1)
For each j = 1..P
If dist2 > T2 then "not a face" else:
If MinDist <= T1(j) then "Subject identified as j" else "subject unidentified"
Then I take into account the false-acceptance and false-rejection rates (TFA and TFR) and repeat the test process with different threshold values until I find the values that work best for each person.
With the thresholds defined, I can put the system into operation on unknown images. The algorithm is similar to the test.
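A rough sketch of how I imagine this test step (Python-style, with hypothetical names: U holds one eigenface per column, O is the M' x P matrix of class weight vectors, Imean is the mean training image, and T1, T2 are the thresholds described above):
import numpy as np

def classify(Itest, U, O, Imean, T1, T2):
    Atest = Itest - Imean                               # mean-subtracted image
    Otest = U.T @ Atest                                 # weight vector of length M'
    dist1 = np.linalg.norm(O - Otest[:, None], axis=0)  # distance to each of the P classes
    Af = U @ Otest                                      # projection back into face space
    dist2 = np.linalg.norm(Atest - Af)                  # distance to face space
    if dist2 > T2:
        return "not a face"
    j = int(np.argmin(dist1))                           # closest class
    if dist1[j] <= T1[j]:
        return "subject identified as %d" % (j + 1)
    return "subject unidentified"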
I know I am going off the "script" of the official documentation, but at least this reasoning is the one that makes the most sense to me. I would appreciate some guidance.
EDIT:
I have nothing more to add that has not already been said and that might help clarify things.
Could anyone tell me whether I am on the right track with my "theory"? I am moving forward with my project, and if this is not the right way, I would appreciate some guidance rather than continuing down the wrong path.
I'm looking for a way of getting X points in a fixed-size grid of, let's say, M by N, where no point is returned more than once, all points have a similar chance of being chosen, and the number of points returned is always X.
I had the idea of looping over all the grid points and giving each point a chance of X/(N*M) of being selected, yet I felt that would give more priority to the first points in the grid. It also didn't meet the requirement of always returning exactly X points.
Also, I could go with a way of using increments with a prime number to get a kind of shuffle-without-repeat functionality, but I'd rather have it behave more randomly than that.
Essentially, you need to keep track of the points you already chose, and make use of a random number generator to get a pseudo-uniformly distributed answer. Each "choice" should be independent of the previous one.
With your first idea, you're right, the first ones would have more chance of getting picked. Consider a one-dimensional array with two elements. With the strategy you mention, the chance of getting the first one is:
P[x=0] = 1/2 = 0.5
The chance of getting the second one is the chance of NOT getting the first one (0.5), times 1/2:
P[x=1] = 1/2 * 1/2 = 0.25
You don't mention which programming language you're using, so I'll assume you have at your disposal a random number generator rand() which returns a random float in the range [0, 1), a hash map (or similar) data structure, and a Point data structure. I'll further assume that a point in the grid can be any floating-point (x, y) with 0 <= x < M and 0 <= y < N. (If this is an N x M array, the same applies, but with integers, and up to (M-1, N-1).)
Map<Point, Integer> points = new HashMap<>();
Point p;
while (points.size() < X) {
    p = new Point(rand() * M, rand() * N);    // random candidate point
    if (!points.containsKey(p)) {             // keep only points not drawn before
        points.put(p, 1);
    }
}
Note: Two Point objects of equal x and y should be themselves considered equal and generate equal hash codes, etc.
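If the grid is an N x M integer grid and X can get close to N*M, the rejection loop above may retry a lot; an alternative (not part of the original answer, just a sketch reusing the same hypothetical Point class) is a partial Fisher-Yates shuffle over the cell indices, which touches each cell at most once:
import java.util.Random;

int[] cells = new int[M * N];
for (int i = 0; i < cells.length; i++) cells[i] = i;    // one index per grid cell

Random rng = new Random();
Point[] chosen = new Point[X];
for (int i = 0; i < X; i++) {
    int j = i + rng.nextInt(cells.length - i);          // uniform pick from the unshuffled tail
    int tmp = cells[i]; cells[i] = cells[j]; cells[j] = tmp;
    chosen[i] = new Point(cells[i] % M, cells[i] / M);  // map index back to (x, y)
}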
Let's say I have a matrix x = [1 2 1 2 1 2 1 2 3 4 5]. To look at its histogram, I can do h = hist(x).
Now, h will only contain the number of occurrences and does not store the original values at which they occurred.
What I want is something like a function which takes a value from x and returns the number of occurrences of that value. That said, one thing histeq does that is worth admiring is that it automatically scales the nearest values accordingly!
How should I solve this issue? How do people usually do it?
My reason for interest is in images:
Let's say I have an image. I want to count the occurrences of each chrominance value of the image.
I'm not really sure what you are looking for, but if you want to use hist to count the number of occurrences, use:
[h,c]=hist(x,sort(unique(x)))
Otherwise hist uses ranges defined by bin centers. The second output argument returns the corresponding values (the bin centers).
hist has a second return value that will be the bin centers xc corresponding to the counts n returned as the first return value: [n, xc] = hist(x). You should have a careful look at the reference, which describes a large number of optional arguments that control the behavior of hist. However, hist is more powerful than you need for this specific problem.
To simply count the number of occurrences of a specific value, you could simply use something like sum(x(:) == 42). The colon operator will linearize your image matrix, the equals operator will yield a list of boolean values with 1 for each element of x that was 42, and thus sum will yield the total number of these occurrences.
An alternative to hist / histc is to use bsxfun:
n = unique(x(:)).';              % values contained in x (x can have any number of dimensions)
y = sum(bsxfun(@eq, x(:), n));   % count for each value
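For the x from the question this gives, for example:
x = [1 2 1 2 1 2 1 2 3 4 5];
n = unique(x(:)).';               % n = [1 2 3 4 5]
y = sum(bsxfun(@eq, x(:), n));    % y = [4 4 1 1 1]
y(n == 2)                         % occurrences of the value 2, i.e. 4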
I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case, say 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1:30, 2:30, 3:30, and so on, but I don't really know how to wrap that into a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem for which to find a general solution.
One approach would be to compute the slopes and intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts(:), 3);
x([idx; 3] == 2)   % the points whose pairwise intercepts fall in the "linear" cluster (cluster 2 here)
This requires the statistics toolbox (for kmeans). This is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes of two points near the start or end of the range happen to lie close to the slope of the line, those points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
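Part of that post-processing could be selecting the right cluster automatically instead of hard-coding cluster 2: kmeans can also return the cluster centroids, so you can take the cluster whose centroid is closest to the known intercept. A small sketch, reusing the variables above and the intercept value from the question:
[idx, C] = kmeans(intercepts(:), 3);    % C holds the three cluster centroids
[~, lin] = min(abs(C - intercept));     % cluster whose centroid is nearest the known intercept
xLinear = x([idx; 0] == lin)            % points whose pairwise intercept falls in that cluster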
Another approach (which I failed to construct successfully) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both of the endpoints, and see if the sum of the squared error remains small. This I gave up very quickly, because defining what "small" is is very subjective and must be done in some heuristic way.
I tried a more systematic and robust approach of the above:
function test
%% example data
slope = 2;
intercept = 1.5;
x = linspace(0.1, 5, 100).';
y = slope*x + intercept;
y(1:12) = log(x(1:12)) + y(12)-log(x(12));
y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
y = y + 0.2*randn(size(y));
%% simple algorithm
[X,fn] = fminsearch(@(ii)P(ii, x,y,intercept), [0.5 0.5])
[~,inds] = P(X, x,y,intercept)
end
function [C, inds] = P(ii, x,y,intercept)
% ii represents fraction of range from center to end,
% So ii lies between 0 and 1.
N = numel(x);
n = round(N/2);
ii = round(ii*n);
inds = min(max(1, n+(-ii(1):ii(2))), N);
% Solve linear system with fixed intercept
A = x(inds);
b = y(inds) - intercept;
% and return the sum of squared errors, divided by
% the number of points included in the set. This
% last step is required to prevent fminsearch from
% reducing the set to 1 point (= minimum possible
% squared error).
C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient which yields the smallest MSE.
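A rough sketch of this idea, assuming x, y and the known intercept from the question are already defined (the number of segments and their length are arbitrary choices here):
nSeg   = 6;                                            % number of overlapping segments
segLen = 10;                                           % points per segment
starts = round(linspace(1, numel(x) - segLen + 1, nSeg));
slopes = zeros(1, nSeg);
mse    = zeros(1, nSeg);
for k = 1:nSeg
    idx = starts(k):(starts(k) + segLen - 1);          % indices of this segment
    slopes(k) = x(idx).' \ (y(idx).' - intercept);     % least-squares slope with fixed intercept
    mse(k) = mean((slopes(k)*x + intercept - y).^2);   % error of this candidate line on all data
end
[~, best] = min(mse);
bestSlope = slopes(best)                               % slope of the segment that fits the whole set best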