Ordering a varlist in Stata code - sorting

I see the following as a programming exercise, rather than a statistically grounded way of doing things.
Basically, I'd like to run N logistic regressions, each with a single predictor variable, and then store each variable name with its chi-squared value. After all regressions are done, I want to display the predictor variables ordered by chi-squared from highest to lowest.
So far I have the following:
local depvar binvar1
local indepvars predvar1 predvar2 predvar3
* expand and check collinearity *
_rmdcoll `depvar' `indepvars', expand
local indepvars "`r(varlist)'"
* first order individual variables by best chi-squared *
local vars
local chis
foreach v in `indepvars' {
    di "RUN: logistic `depvar' `v'"
    quietly logistic `depvar' `v'
    * check if variable is not omitted (constant and iv) *
    if `e(rank)' < 2 {
        di "OMITTED (rank < 2): `v'"
        continue
    }
    * check if chi-squared is > 0 *
    if `e(chi2)' <= 0 {
        di "OMITTED (chi2 <= 0): `v'"
        continue
    }
    * store *
    local vars "`vars' `v'"
    local chis "`chis' `e(chi2)'"
    di "ADDED: `v' (chi2: `e(chi2)')"
}
* ... now sort each variable (from varlist vars) by chi2 (from varlist chis) ... *
How would I sort each variable by the returned chi-square in the last line and then display the list of variables with their chi-squared ordered from highest chi-squared to lowest chi-squared?
To be clear, if the following varlists resulted from the above:
local vars predvar1 predvar2 predvar3
local chis 2 3 1
Then I would like to get something like the following:
local ordered predvar2 3 predvar1 2 predvar3 1
Or, alternatively,
local varso predvar2 predvar1 predvar3
local chiso 3 2 1

Here is one way to do it.
local depvar binvar1
local indepvars predvar1 predvar2 predvar3
* expand and check collinearity *
_rmdcoll `depvar' `indepvars', expand
local indepvars "`r(varlist)'"
* first order individual variables by best chi-squared *
gen chisq = .
gen vars = ""
local i = 1
foreach v in `indepvars' {
    di "RUN: logistic `depvar' `v'"
    quietly logistic `depvar' `v'
    * check if variable is not omitted (constant and iv) *
    if `e(rank)' < 2 {
        di "OMITTED (rank < 2): `v'"
    }
    * check if chi-squared is > 0 *
    else if `e(chi2)' <= 0 {
        di "OMITTED (chi2 <= 0): `v'"
    }
    * store *
    else {
        quietly replace vars = "`v'" in `i'
        quietly replace chisq = -e(chi2) in `i'
        local ++i
        di "ADDED: `v' (chi2: `e(chi2)')"
    }
}
sort chisq
replace chisq = -chisq
l vars chisq if chisq < ., noobs
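For readers coming from other languages, the final step is just pairing each name with its statistic and sorting the pairs in descending order. A minimal Python illustration of that pairing-and-sorting idea, using the example names and values from the question:

names = ["predvar1", "predvar2", "predvar3"]
chis = [2, 3, 1]

# pair each variable with its chi-squared and sort by chi-squared, descending
ordered = sorted(zip(names, chis), key=lambda pair: pair[1], reverse=True)
for name, chi in ordered:
    print(name, chi)   # predvar2 3, predvar1 2, predvar3 1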

Why does the code terminate with a "Solution Not Found" error and "EXIT: Converged to a point of local infeasibility. Problem may be infeasible"?

I cannot seem to figure out why IPOPT cannot find a solution to this. Initially, I thought the problem was totally infeasible, but when I reduce the value of col_total to any number below 161000, or comment out the last constraint equation that contains col_total, it solves and exits with "Optimal Solution Found" and a final objective value of -161775.256826753. I have solved the same maximization problem using Artificial Bee Colony and Particle Swarm Optimization techniques, and they return optimal objective values of at least 225000 and 226000, respectively. Could it be that another solver is required? I have also tried APOPT, BPOPT, and IPOPT and have tinkered with the tolerance values, but no combination seems to work just yet. The code is posted below. Any guidance will be hugely appreciated.
from gekko import GEKKO
import numpy as np

distances = np.array([[[0, 0],   [0, 0],   [0, 0],   [0, 0]],
                      [[155, 0], [0, 0],   [0, 0],   [0, 0]],
                      [[310, 0], [155, 0], [0, 0],   [0, 0]],
                      [[465, 0], [310, 0], [155, 0], [0, 0]],
                      [[620, 0], [465, 0], [310, 0], [155, 0]]])
alpha = 0.5 / np.log(30/0.075)
diam = 31
free = 7
rho = 1.2253
area = np.pi * (diam / 2)**2
min_v = 5.5
axi_max = 0.32485226746
col_total = 176542.96546512868
rat = 14
nn = 5
u_hub_lowerbound = 5.777777777777778
c_pow = 0.59230249
p_max = 0.5 * rho * area * c_pow * free**3

# Initialize Model
m = GEKKO(remote=True)

# initialize variables, set lower and upper bounds
x = [m.Var(value=0.03902278, lb=0, ub=axi_max) for i in range(nn)]

# i = 0
b = 1
c = 0
v_s = list()
for i in range(nn-1):   # Loop runs for nn-1 times
    # print(i)
    # print(i,b,c)
    squared_defs = list()
    while i < b:
        d = distances[b][c][0]
        r = distances[b][c][1]
        ss = (2 * (alpha * d) / diam)
        tt = r / ((diam/2) + (alpha * d))
        squared_defs.append((2 * x[i] / (1 + ss**2)) * np.exp(-(tt**2)) ** 2)
        i += 1
        c += 1
    # Equations
    m.Equation((free * (1 - (sum(squared_defs))**0.5)) - rat <= 0)
    m.Equation((free * (1 - (sum(squared_defs))**0.5)) - u_hub_lowerbound >= 0)
    v_s.append(free * (1 - (sum(squared_defs))**0.5))
    squared_defs.clear()
    b += 1
    c = 0

# Inserts free as the first item on the v_s list to
# increase len(v_s) to nn, so that 'v_s' and 'x'
# are of same length
v_s.insert(0, free)

gamma = list()
for i in range(len(x)):
    bet = (4*x[i]*((1-x[i])**2) * rho * area) / 2
    gam = bet * v_s[i]**3
    gamma.append(gam)
    # Equations
    m.Equation(x[i] - axi_max <= 0)
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2) * v_s[i]**3) - p_max <= 0)
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2) * v_s[i]**3) > 0)

# Equation
m.Equation(col_total - sum(gamma) <= 0)

# Objective
y = sum(gamma)
m.Maximize(y)  # Maximize

# Set global options
m.options.IMODE = 3   # steady state optimization

# Solve
m.options.SOLVER = 3
m.solver_options = ['linear_solver ma27', 'mu_strategy adaptive',
                    'max_iter 2500', 'tol 1.0e-5']
m.solve()
Build the equations without .value in the expressions. The x[i].value property is only needed at the end, to view the solution after the solve is complete, or to initialize the value of x[i]. The expression m.Maximize(y) is more readable than m.Obj(-y), although they are equivalent.
from gekko import GEKKO
import numpy as np

distances = np.array([[[0, 0],   [0, 0],   [0, 0],   [0, 0]],
                      [[155, 0], [0, 0],   [0, 0],   [0, 0]],
                      [[310, 0], [155, 0], [0, 0],   [0, 0]],
                      [[465, 0], [310, 0], [155, 0], [0, 0]],
                      [[620, 0], [465, 0], [310, 0], [155, 0]]])
alpha = 0.5 / np.log(30/0.075)
diam = 31
free = 7
rho = 1.2253
area = np.pi * (diam / 2)**2
min_v = 5.5
axi_max = 0.069262150781
col_total = 20000
p_max = 4000
rat = 14
nn = 5

# Initialize Model
m = GEKKO(remote=True)

# initialize variables, set lower and upper bounds
x = [m.Var(value=0.03902278, lb=0, ub=axi_max) for i in range(nn)]

i = 0
b = 1
c = 0
v_s = list()
for turbs in range(nn-1):   # Loop runs for nn-1 times
    squared_defs = list()
    while i < b:
        d = distances[b][c][0]
        r = distances[b][c][1]
        ss = (2 * (alpha * d) / diam)
        tt = r / ((diam/2) + (alpha * d))
        squared_defs.append((2 * x[i] / (1 + ss**2)) * m.exp(-(tt**2)) ** 2)
        i += 1
        c += 1
    # Equations
    m.Equation((free * (1 - (sum(squared_defs))**0.5)) - rat <= 0)
    m.Equation(min_v - (free * (1 - (sum(squared_defs))**0.5)) <= 0)
    v_s.append(free * (1 - (sum(squared_defs))**0.5))
    squared_defs.clear()
    b += 1
    a = 0
    c = 0

# Inserts free as the first item on the v_s list to
# increase len(v_s) to nn, so that 'v_s' and 'x'
# are of same length
v_s.insert(0, free)

beta = list()
gamma = list()
for i in range(len(x)):
    bet = (4*x[i]*((1-x[i])**2) * rho * area) / 2
    gam = bet * v_s[i]**3
    # Equations
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2) * v_s[i]**3) - p_max <= 0)
    m.Equation((((4*x[i]*((1-x[i])**2) * rho * area) / 2) * v_s[i]**3) > 0)
    gamma.append(gam)

# Equation
m.Equation(col_total - sum(gamma) <= 0)

# Objective
y = sum(gamma)
m.Maximize(y)  # Maximize

# Set global options
m.options.IMODE = 3   # steady state optimization

# Solve
m.options.SOLVER = 3
m.solve()
This gives a successful solution with maximized objective 20,000:
Number of Iterations....: 12
(scaled) (unscaled)
Objective...............: -4.7394814741924645e+00 -1.9999999999929641e+04
Dual infeasibility......: 4.4698510326511536e-07 1.8862194343304290e-03
Constraint violation....: 3.8275766582203308e-11 1.2941979026166479e-07
Complementarity.........: 2.1543608536533588e-09 9.0911246952931704e-06
Overall NLP error.......: 4.6245685940749926e-10 1.8862194343304290e-03
Number of objective function evaluations = 80
Number of objective gradient evaluations = 13
Number of equality constraint evaluations = 80
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 13
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 12
Total CPU secs in IPOPT (w/o function evaluations) = 0.010
Total CPU secs in NLP function evaluations = 0.011
EXIT: Optimal Solution Found.
The solution was found.
The final value of the objective function is -19999.9999999296
---------------------------------------------------
Solver : IPOPT (v3.12)
Solution time : 3.210000000399305E-002 sec
Objective : -19999.9999999296
Successful solution
---------------------------------------------------

subscript indices must be either positive integers less than 2^31 or logicals

SOS, I keep getting errors in the loop, solving by the finite difference method.
I either get the following error when I start with i = 2 : N :
diffusion: A(I,J): row index out of bounds; value 2 out of bound 1
error: called from
diffusion at line 37 column 10 % note line change due to edit!
or, I get the following error when I do i = 2 : N :
subscript indices must be either positive integers less than 2^31 or logicals
error: called from
diffusion at line 37 column 10 % note line change due to edit!
Please help
clear all; close all;
% mesh in space
dx = 0.1;
x = 0 : dx : 1;
% mesh in time
dt = 1 / 50;
t0 = 0;
tf = 10;
t = t0 : dt : tf;
% diffusivity
D = 0.5;
% number of nodes
N = 11;
% number of iterations
M = 10;
% initial conditions
if x <= .5 && x >= 0 % note, in octave, you don't need parentheses around the test expression
u0 = x;
elseif
u0 = 1-x;
endif
u = u0;
alpha = D * dt / (dx^2);
for j = 1 : M
  for i = 1 : N
    u(i, j+1) = u(i, j) ...
              + alpha ...
              * ( u(i-1, j) ...
                + u(i+1, j) ...
                - 2 ...
                * u(i, j) ...
                ) ;
  end
  u(N+1, j+1) = u(N+1, j) ...
              + alpha ...
              * ( ...
                  u(N, j) ...
                - 2 ...
                * u(N+1, j) ...
                + u(N, j) ...
                ) ;
  % boundary conditions
  u(0, :) = u0;
  u(1, :) = u1;
  u1 = u0;
  u0 = 0;
end
% exact solution with 14 terms
%k=14 % COMMENTED OUT
v = (4 / ((k * pi) .^ 2)) ...
* sin( (k * pi) / 2 ) ...
* sin( k * pi * x ) ...
* exp .^ (D * ((k * pi) ^ 2) * t) ;
exact = symsum( v, k, 1, 14 );
error = exact - u;
% plot stuff
plot( t, error );
xlabel( 'time' );
ylabel( 'error' );
legend( 't = 1 / 50' );
Have a look at the edited code I cleaned up for you above and study it.
Don't underestimate the importance of clean, readable code when hunting for bugs.
It will save you more time than it will cost, especially a week from now when you need to revisit this code and will not remember at all what you were trying to do.
Now, regarding your errors (all line references are with respect to the cleaned-up code above).
Scenario 1:
In line 29 you initialise u as a single value.
If you start your loop in line 35 with i = 2, then as soon as you try to do u(i, j+1), i.e. u(2,2), in the next line, Octave will complain that you're trying to index the second row of an array that so far only contains one row. (In fact, the same applies for j, since at this point you only have one column as well.)
Scenario 2:
I assume the second scenario was a typo and you meant to say i = 1 : N.
If you start with i = 1 in the loop, then have a look at line 38: you are trying to get element u(i-1, j), i.e. u(0,1). Octave will therefore complain that you're trying to get the zeroth element, but in Octave arrays start from one and index zero is not defined. Attempting to access any array with a zero index will result in the error you see (try it in a terminal!).
UPDATE
Also, now that the code is clean, you can spot another bug, which Octave helpfully warns you about if you try to run the code.
Look at line 26. There is NO condition in the elseif leg, so Octave treats the next statement as the test condition.
This means that the elseif condition will always succeed as long as the result of u0 = 1-x is non-zero.
This is clearly a bug. Either you forgot to put a condition on the elseif, or, more likely, you probably just meant to say else rather than elseif.
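As an aside, here is a minimal NumPy sketch of the same explicit (FTCS) diffusion update with the solution array preallocated up front, which sidesteps both indexing pitfalls discussed above; the time step is chosen so that alpha <= 0.5, the stability limit for this scheme, and the parameters are otherwise only illustrative, not a fix of the original script:

import numpy as np

D, dx, dt = 0.5, 0.1, 1.0/250      # diffusivity, space step, time step (alpha = 0.2)
x = np.arange(0.0, 1.0 + dx, dx)   # spatial nodes
N, M = len(x), 10                  # number of nodes, number of time steps
alpha = D * dt / dx**2             # must be <= 0.5 for FTCS stability

u = np.zeros((N, M + 1))                   # preallocate: rows = space, columns = time
u[:, 0] = np.where(x <= 0.5, x, 1.0 - x)   # triangular initial condition

for j in range(M):                         # march forward in time
    for i in range(1, N - 1):              # interior nodes only (0-based indexing)
        u[i, j + 1] = u[i, j] + alpha * (u[i - 1, j] - 2 * u[i, j] + u[i + 1, j])
    u[0, j + 1] = 0.0                      # Dirichlet boundary conditions
    u[-1, j + 1] = 0.0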

Matrix has unexpected missing cells when collecting z-statistics from mlogit

I wish to run a series of multinomial logits (600ish per covariate of interest) and gather the z-statistics from each of these (I do not care about the order in which these are recorded).
These mlogits are run on a small piece of my data (observations sharing a group ID). The mlogits have a varying number of outcomes (n), and there will be (n - 1) z-statistics to gather from each mlogit. Each mlogit takes the form y = a + b*x + e, where y can take on between 2 and 9 values (in my data), although the mean is 3.7.
I believe the difficulty comes in pulling these z-stats out of the mlogit, as there is no way I know of to directly call a matrix of z-stats. My solution is to construct the z-stats from the e(V) and e(b) matrices. For each iteration of the mlogit, I construct a matrix of z-stats; I then append this to the previous matrix of z-stats (thereby building a matrix of all of them). Unfortunately, my code does not seem to do this properly.
The symptoms are as follows. The matrix mat_covariate includes many missing values (over half of the matrix values have been missing in the troubleshooting I have done). It also includes many zeroes (which are possible, but unlikely - especially at this rate, about 16%). As written, the code does not yet suppress the mlogits I run, and so I can go back and check what makes it into the matrix. At most one value from each mlogit is recorded, but these are often recorded multiple times. 40% of the mlogits had nothing recorded.
The relevant loop is below:
local counter = 1
forvalues i = 1/`times' {
    preserve
    keep if group_id==`i'
    foreach covariate in `covariates' {
        if `counter' == 1 {
            mlogit class `covariate'
            sum outcomes_n, meanonly
            local max = `r(max)'
            local max_minus = `max' - 1
            matrix mat_`covariate' = J(`max_minus',1,0)
            forvalues j = 1/`max_minus' {
                mat V = e(V)
                mat b = e(b)
                local z = b[1+2*(`j'-1),1] / ( V[1+2*(`j'-1),1+2*(`j'-1)] ) ^ (.5)
                matrix mat_`covariate'[`j',1] = `z'
            }
        }
        else {
            mlogit class `covariate'
            sum outcomes_n, meanonly
            local max `r(max)'
            local max_minus = `max' - 1
            matrix mat_`covariate'_temp = J(`max_minus',1,0)
            forvalues j = 1/`max_minus' {
                mat V = e(V)
                mat b = e(b)
                local z = b[1+2*(`j'-1),1] / ( V[1+2*(`j'-1),1+2*(`j'-1)] ) ^ (.5)
                matrix mat_`covariate'_temp[`j',1] = `z'
                matrix mat_`covariate' = mat_`covariate' \ mat_`covariate'_temp
            }
            matrix mat_`covariate' = mat_`covariate' \ mat_`covariate'_temp
        }
    }
    local counter = `counter'+1
    restore
}
Here are some reasons why I did some of the things in the loop the way I did. I believe these work, but they were not my first instincts, and I am unclear why my first instincts do not work. If there's a simpler/more elegant way to solve them, that would be a nice bonus:
the main if/else (and the counter) is to solve the issue that I cannot define a matrix as a function of itself when it has not yet been defined.
I define a local for the max, and a separate one for the (max-1). The forvalues loop would not accept "1/(`max'-1) {" and I am unsure why.
I created some sample data that can be used to replicate this problem. Below is code for a .do file which sets up data, locals for the loop, the loop above, and demonstrates the symptoms by displaying the matrix in question:
clear all
version 14
//================== sample data: ==================
set obs 500
set seed 12345
gen id = _n
gen group_id = .
replace group_id = 1 if id <= 50
replace group_id = 2 if id <= 100 & missing(group_id)
replace group_id = 3 if id <= 150 & missing(group_id)
replace group_id = 4 if id <= 200 & missing(group_id)
replace group_id = 5 if id <= 250 & missing(group_id)
replace group_id = 6 if id <= 325 & missing(group_id)
replace group_id = 7 if id <= 400 & missing(group_id)
replace group_id = 8 if id <= 500 & missing(group_id)
gen temp_subgroup_id = .
replace temp_subgroup_id = floor((3)*runiform() + 2) if group_id < 6
replace temp_subgroup_id = floor((4)*runiform() + 2) if group_id < 8 & missing(temp_subgroup_id)
replace temp_subgroup_id = floor((5)*runiform() + 2) if missing(temp_subgroup_id)
egen subgroup_id = group(group_id temp_subgroup_id)
bysort subgroup_id : gen subgroup_size = _N
bysort group_id subgroup_id : gen tag = (_n == 1)
bysort group_id : egen outcomes_n = total(tag)
gen binary_x = floor(2*runiform())
//================== locals: ==================
local covariates binary_x
local times = 8
// times is equal to the number of group_ids
//================== loop in question: ==================
local counter = 1
forvalues i = 1/`times' {
    preserve
    keep if group_id==`i'
    foreach covariate in `covariates' {
        if `counter' == 1 {
            mlogit subgroup_id `covariate'
            sum outcomes_n, meanonly
            local max = `r(max)'
            local max_minus = `max' - 1
            matrix mat_`covariate' = J(`max_minus',1,0)
            forvalues j = 1/`max_minus' {
                mat V = e(V)
                mat b = e(b)
                local z = b[1+2*(`j'-1),1] / ( V[1+2*(`j'-1),1+2*(`j'-1)] ) ^ (.5)
                matrix mat_`covariate'[`j',1] = `z'
            }
        }
        else {
            mlogit subgroup_id `covariate'
            sum outcomes_n, meanonly
            local max `r(max)'
            local max_minus = `max' - 1
            matrix mat_`covariate'_temp = J(`max_minus',1,0)
            forvalues j = 1/`max_minus' {
                mat V = e(V)
                mat b = e(b)
                local z = b[1+2*(`j'-1),1] / ( V[1+2*(`j'-1),1+2*(`j'-1)] ) ^ (.5)
                matrix mat_`covariate'_temp[`j',1] = `z'
                matrix mat_`covariate' = mat_`covariate' \ mat_`covariate'_temp
            }
            matrix mat_`covariate' = mat_`covariate' \ mat_`covariate'_temp
        }
    }
    local counter = `counter' + 1
    restore
}
//================== symptoms: ==================
matrix list mat_binary_x
I'm trying to figure out what is wrong in my code, but have been unable to find the issue. (I've found some other smaller errors, but none that have had an impact on the main problem; I would be unsurprised if there are multiple bugs.)
Consider the simplest case when i == 1 and max_minus == 2:
preserve
keep if group_id == 1
summarize outcomes_n, meanonly
local max = `r(max)'
local max_minus = `max' - 1
mlogit subgroup_id binary_x
matrix V = e(V)
matrix b = e(b)
This produces the following:
. matrix list V

symmetric V[6,6]
                      1:           1:           2:           2:           3:           3:
                                                o.           o.
                binary_x        _cons     binary_x        _cons     binary_x        _cons
  1:binary_x   .46111111
     1:_cons       -.225         .225
2:o.binary_x           0            0            0
   2:o._cons           0            0            0            0
  3:binary_x    .2111111   -.09999999            0            0    .47896825
     3:_cons  -.09999999    .09999999            0            0   -.24285714    .24285714

. matrix list b

b[1,6]
            1:           1:           2:           2:           3:           3:
                                      o.           o.
      binary_x        _cons     binary_x        _cons     binary_x        _cons
y1   .10536052   -.22314364            0            0    .23889194   -.35667502
. local j = `max_minus'
. display "z = `= b[1+2*(`j'-1),1] / ( V[1+2*(`j'-1),1+2*(`j'-1)] ) ^ (.5)'"
z = .
The value of z is missing because you are referencing a row of the matrix e(b) that does not exist: e(b) is a 1 x 6 row vector, so with `j' = 2 the expression b[1+2*(`j'-1),1] asks for b[3,1], which lies outside the matrix and evaluates to missing. In other words, your loops are not set up correctly and substitute incorrect values.

How can I find the center of a cluster of data points?

Let's say I plotted the position of a helicopter every day for the past year and came up with the following map:
Any human looking at this would be able to tell me that this helicopter is based out of Chicago.
How can I find the same result in code?
I'm looking for something like this:
$geoCodeArray = array([GET=http://pastebin.com/grVsbgL9]);
function findHome($geoCodeArray) {
// magic
return $geoCode;
}
Ultimately generating something like this:
UPDATE: Sample Dataset
Here's a map with a sample dataset: http://batchgeo.com/map/c3676fe29985f00e1605cd4f86920179
Here's a pastebin of 150 geocodes: http://pastebin.com/grVsbgL9
The above contains 150 geocodes. The first 50 are in a few clusters close to Chicago. The remaining are scattered throughout the country, including some small clusters in New York, Los Angeles, and San Francisco.
I have about a million (seriously) datasets like this that I'll need to iterate through and identify the most likely "home". Your help is greatly appreciated.
UPDATE 2: Airplane switched to Helicopter
The airplane concept was drawing too much attention toward physical airports. The coordinates can be anywhere in the world, not just airports. Let's assume it's a super helicopter not bound by physics, fuel, or anything else. It can land where it wants. ;)
The following solution works even if the points are scattered all over the Earth, by converting latitude and longitude to Cartesian coordinates. It does a kind of KDE (kernel density estimation), but in a first pass the sum of kernels is evaluated only at the data points. The kernel should be chosen to fit the problem. In the code below it is what I could jokingly/presumptuously call a Trossian, i.e., 2 - d²/h² for d ≤ h and h²/d² for d > h (where d is the Euclidean distance and h is the "bandwidth" $global_kernel_radius), but it could also be a Gaussian (e^(-d²/2h²)), an Epanechnikov kernel (1 - d²/h² for d < h, 0 otherwise), or another kernel. An optional second pass refines the search locally, either by summing an independent kernel on a local grid, or by calculating the centroid, in both cases in a surrounding defined by $local_grid_radius.
In essence, each point sums all the points it has around (including itself), weighing them more if they are closer (by the bell curve), and also weighing them by the optional weight array $w_arr. The winner is the point with the maximum sum. Once the winner has been found, the "home" we are looking for can be found by repeating the same process locally around the winner (using another bell curve), or it can be estimated to be the "center of mass" of all points within a given radius from the winner, where the radius can be zero.
The algorithm must be adapted to the problem by choosing the appropriate kernels, by choosing how to refine the search locally, and by tuning the parameters. For the example dataset, the Trossian kernel for the first pass and the Epanechnikov kernel for the second pass, with all 3 radii set to 30 mi and a grid step of 1 mi could be a good starting point, but only if the two sub-clusters of Chicago should be seen as one big cluster. Otherwise smaller radii must be chosen.
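Before the full PHP implementation below, here is a minimal NumPy sketch of just the first pass (a Gaussian kernel summed at every data point, with plain planar coordinates assumed for brevity, so it skips the lat/lng-to-Cartesian conversion the PHP code performs):

import numpy as np

def densest_point(points, h):
    # points: (n, 2) array of planar coordinates; h: kernel bandwidth in the same units
    diff = points[:, None, :] - points[None, :, :]   # pairwise differences
    d2 = np.sum(diff**2, axis=-1)                    # pairwise squared distances
    k_sum = np.exp(-0.5 * d2 / h**2).sum(axis=1)     # Gaussian kernel sum at each point
    return points[np.argmax(k_sum)]                  # the point in the densest region

# hypothetical usage, treating lat/lng pairs as planar coordinates
pts = np.array([[41.89, -87.67], [42.05, -88.00], [40.78, -73.96]])
print(densest_point(pts, h=0.5))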
function find_home($lat_arr, $lng_arr, $global_kernel_radius,
$local_kernel_radius,
$local_grid_radius, // 0 for no 2nd pass
$local_grid_step, // 0 for centroid
$units='mi',
$w_arr=null)
{
// for lat,lng <-> x,y,z see http://en.wikipedia.org/wiki/Geodetic_datum
// for K and h see http://en.wikipedia.org/wiki/Kernel_density_estimation
switch (strtolower($units)) {
/* */case 'nm' :
/*or*/case 'nmi': $m_divisor = 1852;
break;case 'mi': $m_divisor = 1609.344;
break;case 'km': $m_divisor = 1000;
break;case 'm': $m_divisor = 1;
break;default: return false;
}
$a = 6378137 / $m_divisor; // Earth semi-major axis (WGS84)
$e2 = 6.69437999014E-3; // First eccentricity squared (WGS84)
$lat_lng_count = count($lat_arr);
if ( !$w_arr) {
$w_arr = array_fill(0, $lat_lng_count, 1.0);
}
$x_arr = array();
$y_arr = array();
$z_arr = array();
$rad = M_PI / 180;
$one_e2 = 1 - $e2;
for ($i = 0; $i < $lat_lng_count; $i++) {
$lat = $lat_arr[$i];
$lng = $lng_arr[$i];
$sin_lat = sin($lat * $rad);
$sin_lng = sin($lng * $rad);
$cos_lat = cos($lat * $rad);
$cos_lng = cos($lng * $rad);
// height = 0 (!)
$N = $a / sqrt(1 - $e2 * $sin_lat * $sin_lat);
$x_arr[$i] = $N * $cos_lat * $cos_lng;
$y_arr[$i] = $N * $cos_lat * $sin_lng;
$z_arr[$i] = $N * $one_e2 * $sin_lat;
}
$h = $global_kernel_radius;
$h2 = $h * $h;
$max_K_sum = -1;
$max_K_sum_idx = -1;
for ($i = 0; $i < $lat_lng_count; $i++) {
$xi = $x_arr[$i];
$yi = $y_arr[$i];
$zi = $z_arr[$i];
$K_sum = 0;
for ($j = 0; $j < $lat_lng_count; $j++) {
$dx = $xi - $x_arr[$j];
$dy = $yi - $y_arr[$j];
$dz = $zi - $z_arr[$j];
$d2 = $dx * $dx + $dy * $dy + $dz * $dz;
$K_sum += $w_arr[$j] * ($d2 <= $h2 ? (2 - $d2 / $h2) : $h2 / $d2); // Trossian ;-)
// $K_sum += $w_arr[$j] * exp(-0.5 * $d2 / $h2); // Gaussian
}
if ($max_K_sum < $K_sum) {
$max_K_sum = $K_sum;
$max_K_sum_i = $i;
}
}
$winner_x = $x_arr [$max_K_sum_i];
$winner_y = $y_arr [$max_K_sum_i];
$winner_z = $z_arr [$max_K_sum_i];
$winner_lat = $lat_arr[$max_K_sum_i];
$winner_lng = $lng_arr[$max_K_sum_i];
$sin_winner_lat = sin($winner_lat * $rad);
$cos_winner_lat = cos($winner_lat * $rad);
$sin_winner_lng = sin($winner_lng * $rad);
$cos_winner_lng = cos($winner_lng * $rad);
$east_x = -$local_grid_step * $sin_winner_lng;
$east_y = $local_grid_step * $cos_winner_lng;
$east_z = 0;
$north_x = -$local_grid_step * $sin_winner_lat * $cos_winner_lng;
$north_y = -$local_grid_step * $sin_winner_lat * $sin_winner_lng;
$north_z = $local_grid_step * $cos_winner_lat;
if ($local_grid_radius > 0 && $local_grid_step > 0) {
$r = intval($local_grid_radius / $local_grid_step);
$r2 = $r * $r;
$h = $local_kernel_radius;
$h2 = $h * $h;
$max_L_sum = -1;
$max_L_sum_idx = -1;
for ($i = -$r; $i <= $r; $i++) {
$winner_east_x = $winner_x + $i * $east_x;
$winner_east_y = $winner_y + $i * $east_y;
$winner_east_z = $winner_z + $i * $east_z;
$j_max = intval(sqrt($r2 - $i * $i));
for ($j = -$j_max; $j <= $j_max; $j++) {
$x = $winner_east_x + $j * $north_x;
$y = $winner_east_y + $j * $north_y;
$z = $winner_east_z + $j * $north_z;
$L_sum = 0;
for ($k = 0; $k < $lat_lng_count; $k++) {
$dx = $x - $x_arr[$k];
$dy = $y - $y_arr[$k];
$dz = $z - $z_arr[$k];
$d2 = $dx * $dx + $dy * $dy + $dz * $dz;
if ($d2 < $h2) {
$L_sum += $w_arr[$k] * ($h2 - $d2); // Epanechnikov
}
}
if ($max_L_sum < $L_sum) {
$max_L_sum = $L_sum;
$max_L_sum_i = $i;
$max_L_sum_j = $j;
}
}
}
$x = $winner_x + $max_L_sum_i * $east_x + $max_L_sum_j * $north_x;
$y = $winner_y + $max_L_sum_i * $east_y + $max_L_sum_j * $north_y;
$z = $winner_z + $max_L_sum_i * $east_z + $max_L_sum_j * $north_z;
} else if ($local_grid_radius > 0) {
$r = $local_grid_radius;
$r2 = $r * $r;
$wx_sum = 0;
$wy_sum = 0;
$wz_sum = 0;
$w_sum = 0;
for ($k = 0; $k < $lat_lng_count; $k++) {
$xk = $x_arr[$k];
$yk = $y_arr[$k];
$zk = $z_arr[$k];
$dx = $winner_x - $xk;
$dy = $winner_y - $yk;
$dz = $winner_z - $zk;
$d2 = $dx * $dx + $dy * $dy + $dz * $dz;
if ($d2 <= $r2) {
$wk = $w_arr[$k];
$wx_sum += $wk * $xk;
$wy_sum += $wk * $yk;
$wz_sum += $wk * $zk;
$w_sum += $wk;
}
}
$x = $wx_sum / $w_sum;
$y = $wy_sum / $w_sum;
$z = $wz_sum / $w_sum;
$max_L_sum_i = false;
$max_L_sum_j = false;
} else {
return array($winner_lat, $winner_lng, $max_K_sum_i, false, false);
}
$deg = 180 / M_PI;
$a2 = $a * $a;
$e4 = $e2 * $e2;
$p = sqrt($x * $x + $y * $y);
$zeta = (1 - $e2) * $z * $z / $a2;
$rho = ($p * $p / $a2 + $zeta - $e4) / 6;
$rho3 = $rho * $rho * $rho;
$s = $e4 * $zeta * $p * $p / (4 * $a2);
$t = pow($s + $rho3 + sqrt($s * ($s + 2 * $rho3)), 1 / 3);
$u = $rho + $t + $rho * $rho / $t;
$v = sqrt($u * $u + $e4 * $zeta);
$w = $e2 * ($u + $v - $zeta) / (2 * $v);
$k = 1 + $e2 * (sqrt($u + $v + $w * $w) + $w) / ($u + $v);
$lat = atan($k * $z / $p) * $deg;
$lng = atan2($y, $x) * $deg;
return array($lat, $lng, $max_K_sum_i, $max_L_sum_i, $max_L_sum_j);
}
The fact that distances are Euclidean and not great-circle should have negligible effects for the task at hand. Calculating great-circle distances would be much more cumbersome, and would cause only the weight of very far points to be significantly lower - but these points already have a very low weight. In principle, the same effect could be achieved by a different kernel. Kernels that have a complete cut-off beyond some distance, like the Epanechnikov kernel, don't have this problem at all (in practice).
The conversion between lat,lng and x,y,z for the WGS84 datum is given exactly (although without guarantee of numerical stability) more as a reference than because of a true need. If the height is to be taken into account, or if a faster back-conversion is needed, please refer to the Wikipedia article.
The Epanechnikov kernel, besides being "more local" than the Gaussian and Trossian kernels, has the advantage of being the fastest for the second loop, which is O(ng), where g is the number of points of the local grid, and can also be employed in the first loop, which is O(n²), if n is big.
This can be solved by finding a jeopardy surface. See Rossmo's Formula.
This is the predator problem. Given a set of geographically-located carcasses, where is the lair of the predator? Rossmo's formula solves this problem.
Find the point with the largest density estimate.
Should be pretty much straightforward. Use a kernel radius that roughly covers a large airport in diameter. A 2D Gaussian or Epanechnikov kernel should be fine.
http://en.wikipedia.org/wiki/Multivariate_kernel_density_estimation
This is similar to computing a heat map: http://en.wikipedia.org/wiki/Heat_map
and then finding the brightest spot there. Except it computes the brightness right away.
For fun I read a 1% sample of the geocoordinates of DBpedia (i.e. Wikipedia) into ELKI, projected it into 3D space and enabled the density estimation overlay (hidden in the visualizer's scatterplot menu). You can see there is a hotspot over Europe, and to a lesser extent in the US. The hotspot in Europe is Poland, I believe. Last I checked, someone apparently had created a Wikipedia article with geocoordinates for pretty much any town in Poland. The ELKI visualizer unfortunately does not let you zoom in, rotate, or reduce the kernel bandwidth to visually find the densest point. But it's straightforward to implement yourself; you probably also don't need to go into 3D space, but can just use latitudes and longitudes.
Kernel Density Estimation should be available in tons of applications. The one in R is probably much more powerful. I just recently discovered this heatmap in ELKI, so I knew how to quickly access it. See e.g. http://stat.ethz.ch/R-manual/R-devel/library/stats/html/density.html for a related R function.
On your data, in R, try for example:
library(KernSmooth)
smoothScatter(data, nbin=512, bandwidth=c(.25,.25))
This should show a strong preference for Chicago. For the actual maximum of the density estimate:
library(KernSmooth)
dens=bkde2D(data, gridsize=c(512, 512), bandwidth=c(.25,.25))
contour(dens$x1, dens$x2, dens$fhat)
maxpos = which(dens$fhat == max(dens$fhat), arr.ind=TRUE)
c(dens$x1[maxpos[1]], dens$x2[maxpos[2]])
yields [1] 42.14697 -88.09508, which is less than 10 miles from Chicago airport.
To get better coordinates try:
rerunning on a 20x20 mile area around the estimated coordinates
a non-binned KDE in that area
better bandwidth selection with dpik
higher grid resolution
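If you prefer Python over R, a rough equivalent of the "take the maximum of the density estimate" idea, using SciPy's gaussian_kde evaluated at the data points themselves (bandwidth selection differs from the R code above, and points.txt is a hypothetical dump of the lat/lng pairs, so treat this as a sketch rather than a drop-in replacement):

import numpy as np
from scipy.stats import gaussian_kde

data = np.loadtxt("points.txt", delimiter=",")   # (n, 2) array of [lat, lng] rows

kde = gaussian_kde(data.T)            # kernel density estimate over lat/lng
density_at_points = kde(data.T)       # evaluate the density at every data point
home = data[np.argmax(density_at_points)]
print(home)                           # the data point lying in the densest region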
In astrophysics we use the so-called "half mass radius". Given a distribution and its center, the half mass radius is the minimum radius of a circle that contains half of the points of your distribution.
This quantity is a characteristic length of a distribution of points.
If you want the home of the helicopter to be where the points are maximally concentrated, then it is the point that has the minimum half mass radius!
My algorithm is as follows: for each point you compute this half mass radius, centring the distribution on the current point. The "home" of the helicopter will be the point with the minimum half mass radius.
I've implemented it and the computed center is 42.149994 -88.133698 (which is in Chicago).
I've also used 0.2 of the total mass instead of the 0.5 (half) usually used in astrophysics.
This is my algorithm (in Python) that finds the home of the helicopter:
import math
import numpy

def inside(points, center, radius):
    ids = (((points[:,0]-center[0])**2. + (points[:,1]-center[1])**2.) <= radius**2.)
    return points[ids]

points = numpy.loadtxt(open('points.txt'), comments='#')
npoints = len(points)
deltar = 0.1
idcenter = None
halfrmin = None
for i in xrange(0, npoints):
    center = points[i]
    radius = 0.
    stayHere = True
    while stayHere:
        radius = radius + deltar
        ninside = len(inside(points, center, radius))
        #print 'point',i,'r',radius,'in',ninside,'center',center
        if (ninside >= npoints*0.2):
            if (halfrmin == None or radius < halfrmin):
                halfrmin = radius
                idcenter = i
                print 'point', i, halfrmin, idcenter, points[idcenter]
            stayHere = False
#print halfrmin, idcenter
print points[idcenter]
You can use DBSCAN for that task.
DBSCAN is a density-based clustering algorithm with a notion of noise. You need two parameters:
First, the minimum number of points a cluster should have, "minpoints".
Second, a neighbourhood parameter called "epsilon" that sets a distance threshold to the surrounding points that should be included in your cluster.
The whole algorithm works like this:
1. Start with an arbitrary point in your set that hasn't been visited yet.
2. Retrieve all points from its epsilon neighbourhood and mark them as visited.
3. If you have found enough points in this neighbourhood (> minpoints parameter), start a new cluster and assign those points to it. Now recurse into step 2 for every point in this cluster.
4. If you haven't, declare this point as noise.
5. Repeat until you've visited all points.
It is really simple to implement and there are lots of frameworks that support this algorithm already. To find the mean of your cluster, you can simply take the mean of all the points assigned to it.
However, unlike the method that @TylerDurden proposes, this needs parameterization, so you need to find some hand-tuned parameters that fit your problem.
In your case, you can try to set minpoints to 10% of your total points if the plane is likely to spend 10% of the tracked time at an airport. The density parameter epsilon depends on the resolution of your geographic sensor and the distance metric you use; I would suggest the haversine distance for geographic data.
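A minimal scikit-learn sketch of this approach (the 30 km eps and the 10% min_samples rule are assumptions to be tuned; the haversine metric expects coordinates in radians, and points.txt is a hypothetical dump of the lat/lng pairs):

import numpy as np
from sklearn.cluster import DBSCAN

data = np.loadtxt("points.txt", delimiter=",")    # (n, 2) array of [lat, lng]
coords_rad = np.radians(data)                     # haversine works in radians

eps_km, earth_radius_km = 30.0, 6371.0
db = DBSCAN(eps=eps_km / earth_radius_km,         # neighbourhood radius in radians
            min_samples=max(3, len(data) // 10),  # roughly 10% of the points
            metric="haversine").fit(coords_rad)

labels = db.labels_                               # -1 marks noise points
biggest = np.argmax(np.bincount(labels[labels >= 0]))
home = data[labels == biggest].mean(axis=0)       # mean of the densest cluster
print(home)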
How about dividing the map into many zones and then finding the center of the planes in the zone with the most planes? The algorithm will be something like this:

set Zones[40]
foreach Plane in Planes
    Zones[GetZone(Plane.position)].Add(Plane)

set MaxZone = Zones[0]
foreach Zone in Zones
    if MaxZone.Length() < Zone.Length()
        MaxZone = Zone

set Center
foreach Plane in MaxZone
    Center.X += Plane.X
    Center.Y += Plane.Y

Center.X /= MaxZone.Length
Center.Y /= MaxZone.Length
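The same zoning idea in a few lines of Python, using a 2-D histogram to find the busiest cell and then averaging the points that fall into it (the 40x40 grid and the points.txt input are assumptions):

import numpy as np

data = np.loadtxt("points.txt", delimiter=",")    # (n, 2) array of [lat, lng]

# bin the points into a 40x40 grid covering their bounding box
counts, lat_edges, lng_edges = np.histogram2d(data[:, 0], data[:, 1], bins=40)
i, j = np.unravel_index(np.argmax(counts), counts.shape)

# average the points that fall into the busiest cell
in_cell = ((data[:, 0] >= lat_edges[i]) & (data[:, 0] <= lat_edges[i + 1]) &
           (data[:, 1] >= lng_edges[j]) & (data[:, 1] <= lng_edges[j + 1]))
print(data[in_cell].mean(axis=0))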
All I have on this machine is an old compiler so I made an ASCII version of this. It "draws" (in ASCII) a map - dots are points, X is where the real source is, G is where the guessed source is. If the two overlap, only X is shown.
Examples (DIFFICULTY 1.5 and 3 respectively):
The points are generated by picking a random point as the source, then randomly distributing points, making them more likely to be closer to the source.
DIFFICULTY is a floating point constant that regulates the initial point generation - how much more likely the points are to be closer to the source - if it is 1 or less, the program should be able to guess the exact source, or very close. At 2.5, it should still be pretty decent. At 4+, it will start to guess worse, but I think it still guesses better than a human would.
It could be optimized by using binary search over X, then Y - this would make the guess worse, but would be much, much faster. Or by starting with larger blocks, then splitting the best block further (or the best block and the 8 surrounding it). For a higher resolution system, one of these would be necessary. This is quite a naive approach, though, but it seems to work well in an 80x24 system. :D
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

#define Y 24
#define X 80
#define DIFFICULTY 1 // Try different values...

static int point[Y][X];

double dist(int x1, int y1, int x2, int y2)
{
    return sqrt((y1 - y2)*(y1 - y2) + (x1 - x2)*(x1 - x2));
}

main()
{
    srand(time(0));
    int y = rand()%Y;
    int x = rand()%X;

    // Generate points
    for (int i = 0; i < Y; i++)
    {
        for (int j = 0; j < X; j++)
        {
            double u = DIFFICULTY * pow(dist(x, y, j, i), 1.0 / DIFFICULTY);
            if ((int)u == 0)
                u = 1;
            point[i][j] = !(rand()%(int)u);
        }
    }

    // Find best source
    int maxX = -1;
    int maxY = -1;
    double maxScore = -1;
    for (int cy = 0; cy < Y; cy++)
    {
        for (int cx = 0; cx < X; cx++)
        {
            double score = 0;
            for (int i = 0; i < Y; i++)
            {
                for (int j = 0; j < X; j++)
                {
                    if (point[i][j] == 1)
                    {
                        double d = dist(cx, cy, j, i);
                        if (d == 0)
                            d = 0.5;
                        score += 1000 / d;
                    }
                }
            }
            if (score > maxScore || maxScore == -1)
            {
                maxScore = score;
                maxX = cx;
                maxY = cy;
            }
        }
    }

    // Print out results
    for (int i = 0; i < Y; i++)
    {
        for (int j = 0; j < X; j++)
        {
            if (i == y && j == x)
                printf("X");
            else if (i == maxY && j == maxX)
                printf("G");
            else if (point[i][j] == 0)
                printf(" ");
            else if (point[i][j] == 1)
                printf(".");
        }
    }
    printf("Distance from real source: %f", dist(maxX, maxY, x, y));
    scanf("%d", 0);
}
Virtual Earth has a very good explanation of how you can do it relatively quickly. They have also provided code examples. Please have a look at http://soulsolutions.com.au/Articles/ClusteringVirtualEarthPart1.aspx
A simple mixture model seems to work pretty well for this problem.
In general, to get a point that minimizes the distance to all other points in a dataset, you can just take the mean. In this case, you want to find a point that minimizes the distance from a subset of concentrated points. If you postulate that a point can either come from the concentrated set of points of interest or from a diffuse set of background points, then this gives a mixture model.
I have included some Python code below. The concentrated area is modeled by a high-precision normal distribution and the background points are modeled by either a low-precision normal distribution or a uniform distribution over a bounding box on the dataset (there is a commented-out line of code that can be used to switch between these options). Also, mixture models can be somewhat unstable, so running the EM algorithm a few times with random initial conditions and choosing the run with the highest log-likelihood gives better results.
If you are actually looking at airplanes, then adding some sort of time dependent dynamics will probably improve your ability to infer the home base immensely.
I would also be wary of Rossmo's formula because it includes some pretty strong assumptions about crime distributions.
#the dataset
sdata='''41.892694,-87.670898
42.056048,-88.000488
41.941744,-88.000488
42.072361,-88.209229
42.091933,-87.982635
42.149994,-88.133698
42.171371,-88.286133
42.23241,-88.305359
42.196811,-88.099365
42.189689,-88.188629
42.17646,-88.173523
42.180531,-88.209229
42.18168,-88.187943
42.185496,-88.166656
42.170485,-88.150864
42.150634,-88.140564
42.156743,-88.123741
42.118555,-88.105545
42.121356,-88.112755
42.115499,-88.102112
42.119319,-88.112411
42.118046,-88.110695
42.117791,-88.109322
42.182189,-88.182449
42.194145,-88.183823
42.189057,-88.196182
42.186513,-88.200645
42.180917,-88.197899
42.178881,-88.192062
41.881656,-87.6297
41.875521,-87.6297
41.87872,-87.636566
41.872073,-87.62661
41.868239,-87.634506
41.86875,-87.624893
41.883065,-87.62352
41.881021,-87.619743
41.879998,-87.620087
41.8915,-87.633476
41.875163,-87.620773
41.879125,-87.62558
41.862763,-87.608757
41.858672,-87.607899
41.865192,-87.615795
41.87005,-87.62043
42.073061,-87.973022
42.317241,-88.187256
42.272546,-88.088379
42.244086,-87.890625
42.044512,-88.28064
39.754977,-86.154785
39.754977,-89.648437
41.043369,-85.12207
43.050074,-89.406738
43.082179,-87.912598
42.7281,-84.572754
39.974226,-83.056641
38.888093,-77.01416
39.923692,-75.168457
40.794318,-73.959961
40.877439,-73.146973
40.611086,-73.740234
40.627764,-73.234863
41.784881,-71.367187
42.371988,-70.993652
35.224587,-80.793457
36.753465,-76.069336
39.263361,-76.530762
25.737127,-80.222168
26.644083,-81.958008
30.50223,-87.275391
29.436309,-98.525391
30.217839,-97.844238
29.742023,-95.361328
31.500409,-97.163086
32.691688,-96.877441
32.691688,-97.404785
35.095754,-106.655273
33.425138,-112.104492
32.873244,-117.114258
33.973545,-118.256836
33.681497,-117.905273
33.622982,-117.734985
33.741828,-118.092041
33.64585,-117.861328
33.700707,-118.015137
33.801189,-118.251343
33.513132,-117.740479
32.777244,-117.235107
32.707939,-117.158203
32.703317,-117.268066
32.610821,-117.075806
34.419726,-119.701538
37.750358,-122.431641
37.50673,-122.387695
37.174817,-121.904297
37.157307,-122.321777
37.271492,-122.033386
37.435238,-122.217407
37.687794,-122.415161
37.542025,-122.299805
37.609506,-122.398682
37.544203,-122.0224
37.422151,-122.13501
37.395971,-122.080078
45.485651,-122.739258
47.719463,-122.255859
47.303913,-122.607422
45.176713,-122.167969
39.566,-104.985352
39.124201,-94.614258
35.454518,-97.426758
38.473482,-90.175781
45.021612,-93.251953
42.417881,-83.056641
41.371141,-81.782227
33.791132,-84.331055
30.252543,-90.439453
37.421248,-122.174835
37.47794,-122.181702
37.510628,-122.254486
37.56943,-122.346497
37.593373,-122.384949
37.620571,-122.489319
36.984249,-122.03064
36.553017,-121.893311
36.654442,-121.772461
36.482381,-121.876831
36.15042,-121.651611
36.274518,-121.838379
37.817717,-119.569702
39.31657,-120.140991
38.933041,-119.992676
39.13785,-119.778442
39.108019,-120.239868
38.586082,-121.503296
38.723354,-121.289062
37.878444,-119.437866
37.782994,-119.470825
37.973771,-119.685059
39.001377,-120.17395
40.709076,-73.948975
40.846346,-73.861084
40.780452,-73.959961
40.778829,-73.958931
40.78372,-73.966012
40.783688,-73.965325
40.783692,-73.965615
40.783675,-73.965741
40.783835,-73.965873
'''
import StringIO
import numpy as np
import re
import matplotlib.pyplot as plt
def lp(l):
    return map(lambda m: float(m.group()),re.finditer('[^, \n]+',l))

data=np.array(map(lp,StringIO.StringIO(sdata)))

xmn=np.min(data[:,0])
xmx=np.max(data[:,0])
ymn=np.min(data[:,1])
ymx=np.max(data[:,1])

# area of the point set bounding box
area=(xmx-xmn)*(ymx-ymn)

M_ITER=100 #maximum number of iterations
THRESH=1e-10 # stopping threshold

def em(x):
    print '\nSTART EM'
    mlst=[]
    mu0=np.mean( data , 0 ) # the sample mean of the data - use this as the mean of the low-precision gaussian
    # the mean of the high-precision Gaussian - this is what we are looking for
    mu=np.random.rand( 2 )*np.array([xmx-xmn,ymx-ymn])+np.array([xmn,ymn])
    lam_lo=.001  # precision of the low-precision Gaussian
    lam_hi=.1    # precision of the high-precision Gaussian
    prz=np.random.rand( 1 ) # probability of choosing the high-precision Gaussian mixture component
    for i in xrange(M_ITER):
        mlst.append(mu[:])
        l_hi=np.log(prz)+np.log(lam_hi)-.5*lam_hi*np.sum((x-mu)**2,1)
        #low-precision normal background distribution
        l_lo=np.log(1.0-prz)+np.log(lam_lo)-.5*lam_lo*np.sum((x-mu0)**2,1)
        #uncomment for the uniform background distribution
        #l_lo=np.log(1.0-prz)-np.log(area)
        #expectation step
        zs=1.0/(1.0+np.exp(l_lo-l_hi))
        #compute bound on the likelihood
        lh=np.sum(zs*l_hi+(1.0-zs)*l_lo)
        print i,lh
        #maximization step
        mu=np.sum(zs[:,None]*x,0)/np.sum(zs) #mean
        lam_hi=np.sum(zs)/np.sum(zs*.5*np.sum((x-mu)**2,1)) #precision
        prz=1.0/(1.0+np.sum(1.0-zs)/np.sum(zs)) #mixture component probability
        try:
            if np.abs((lh-old_lh)/lh)<THRESH:
                break
        except:
            pass
        old_lh=lh
    mlst.append(mu[:])
    return lh,lam_hi,mlst

if __name__=='__main__':
    #repeat the EM algorithm a number of times and get the run with the best log likelihood
    mx_prm=em(data)
    for i in xrange(4):
        prm=em(data)
        if prm[0]>mx_prm[0]:
            mx_prm=prm
        print prm[0]
    print mx_prm[0]
    lh,lam_hi,mlst=mx_prm
    mu=mlst[-1]
    print 'best loglikelihood:', lh
    #print 'final precision value:', lam_hi
    print 'point of interest:', mu
    plt.plot(data[:,0],data[:,1],'.b')
    for m in mlst:
        plt.plot(m[0],m[1],'xr')
    plt.show()
You can easily adapt Rossmo's formula, quoted by Tyler Durden, to your case with a few simple notes:
The formula:
This formula gives something close to a probability of presence of the base of operations for a predator or a serial killer. In your case it could give the probability that the base is at a certain point. I'll explain later how to use it. You can write it this way:
Proba(base at point A) = Sum{over all spots} ( Phi/(dist^f) + (1-Phi)*B^(g-f)/(2B-dist)^g )
Using Euclidean distance
You want the Euclidean distance and not the Manhattan one, because an airplane or helicopter is not bound to roads/streets. So using the Euclidean distance is the correct way if you are tracking an airplane and not a serial killer. "dist" in the formula is therefore the Euclidean distance between the spot you are testing and the spot considered.
Taking a reasonable variable B
Variable B was used to represent the rule "a reasonably smart killer will not kill his neighbor". In your case it also applies, because no one uses an airplane/helicopter to get to the next street corner. We can suppose that the minimal journey is, for example, 10 km, or anything reasonable when applied to your case.
Exponential factor f
Factor f is used to weight the distance. For example, if all the spots are in a small area you may want a big factor f, because the probability of the airport/base/HQ will decrease quickly if all your data points are in the same sector. g works in a similar way; it allows you to choose the size of the "base is unlikely to be just next to the spot" area.
Factor Phi:
Again, this factor has to be determined using your knowledge of the problem. It lets you weight "the base is close to the spots" against "I won't use the plane for a 5 m trip". If, for example, you think the second one is almost irrelevant, you can set Phi to 0.95 (0 < Phi < 1); if both matter, Phi will be around 0.5.
How to implement it as something useful:
First you want to divide your map into little squares: meshing the map (just like invisal did); the smaller the squares, the more accurate the result (in general). Then use the formula to find the most probable location. In fact the mesh is just an array of all possible locations. (If you want more accuracy you increase the number of possible spots, but it will require more computational time, and PHP is not well known for its amazing speed.)
Algorithm:
// define all the factors you need (B, f, g, Phi)
for (i = 0..mesh_size)  // compute the probability of presence for each square of the mesh
{
    P(i) = 0;
    geocode squarePosition;  // geocode of the square's center
    for (j = 0..geocodearray_size)  // sum over all the known spots
    {
        dist = Distance(geocodearray[j], squarePosition);  // small function returning the distance between two geocodes
        P(i) += (Phi/pow(dist,f)) + (1-Phi)*pow(B,g-f)/pow(2*B-dist,g);
    }
}
return geocode corresponding to max(P(i))
Hope that it will help you
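For completeness, a small Python version of the same mesh idea; B, f, g and Phi are placeholders to tune, distances are plain Euclidean in degrees as discussed above, and the denominators are clamped to keep the simplified formula finite where the full Rossmo formula would switch terms with an indicator function:

import numpy as np

def rossmo_home(points, B=0.1, f=1.2, g=1.2, phi=0.5, mesh=100):
    # score a mesh of candidate locations and return the best-scoring cell center
    lats = np.linspace(points[:, 0].min(), points[:, 0].max(), mesh)
    lngs = np.linspace(points[:, 1].min(), points[:, 1].max(), mesh)
    best, best_score = None, -np.inf
    for la in lats:
        for ln in lngs:
            d = np.sqrt((points[:, 0] - la)**2 + (points[:, 1] - ln)**2)
            d = np.maximum(d, 1e-9)             # avoid division by zero
            buf = np.maximum(2*B - d, 1e-9)     # clamp the buffer-zone term
            score = np.sum(phi / d**f + (1 - phi) * B**(g - f) / buf**g)
            if score > best_score:
                best, best_score = (la, ln), score
    return best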
First I would like to express my fondness for your method of illustrating and explaining the problem.
If I were in your shoes, I would go for a density-based algorithm like DBSCAN.
Then, after clustering the areas and removing the noise points, a few areas (choices) will remain. I'd take the cluster with the highest density of points, calculate its average point, and find the nearest real point to it. Done, found the place! :)
Regards,
Why not something like this:
For each point, calculate its distance from all other points and sum the total.
The point with the smallest sum is your center.
Maybe sum isn't the best metric to use. Possibly the point with the most "small distances"?
Sum over the distances. Take the point with the smallest summed distance.
from math import dist  # Euclidean distance, Python 3.8+

def find_center(P):
    # total distance from each point to every other point
    S = [sum(dist(p, q) for q in P) for p in P]
    return P[S.index(min(S))]  # the point with the smallest summed distance
You can take a minimum spanning tree and remove the longest edges. The smaller trees give you the centroid to look up. The algorithm is called single-link k-clustering. There is a post here: https://stats.stackexchange.com/questions/1475/visualization-software-for-clustering.
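A sketch of that idea with SciPy's single-linkage clustering, where cutting the dendrogram at a distance threshold plays the role of removing the longest MST edges (the 0.5-degree cut and the points.txt input are assumptions):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

data = np.loadtxt("points.txt", delimiter=",")     # (n, 2) array of [lat, lng]

Z = linkage(data, method="single")                 # single-link = MST-based clustering
labels = fcluster(Z, t=0.5, criterion="distance")  # cut long links at 0.5 degrees

largest = np.argmax(np.bincount(labels))           # biggest remaining cluster
print(data[labels == largest].mean(axis=0))        # its centroid as the "home"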

Implementing a perceptron with backpropagation algorithm

I am trying to implement a two-layer perceptron with backpropagation to solve the parity problem. The network has 4 binary inputs, 4 hidden units in the first layer and 1 output in the second layer. I am using this for reference, but am having problems with convergence.
First, I will note that I am using a sigmoid function for activation, and so the derivative is (from what I understand) the sigmoid(v) * (1 - sigmoid(v)). So, that is used when calculating the delta value.
So, basically I set up the network and run for just a few epochs (go through each possible pattern -- in this case, 16 patterns of input). After the first epoch, the weights are changed slightly. After the second, the weights do not change and remain so no matter how many more epochs I run. I am using a learning rate of 0.1 and a bias of +1 for now.
The process of training the network is below in pseudocode (which I believe to be correct according to sources I've checked):
Feed Forward Step:
    v = SUM[weight connecting input to hidden * input value] + bias
    y = Sigmoid(v)
    set hidden.values to y
    v = SUM[weight connecting hidden to output * hidden value] + bias
    y = Sigmoid(v)
    set output value to y

Backpropagation of Output Layer:
    error = desired - output.value
    outputDelta = error * output.value * (1 - output.value)

Backpropagation of Hidden Layer:
    for each hidden neuron h:
        error = outputDelta * weight connecting h to output
        hiddenDelta[i] = error * h.value * (1 - h.value)

Update Weights:
    for each hidden neuron h connected to the output layer
        h.weight connecting h to output = learningRate * outputDelta * h.value
    for each input neuron x connected to the hidden layer
        x.weight connecting x to h[i] = learningRate * hiddenDelta[i] * x.value
This process is of course looped through the epochs and the weight changes persist. So, my question is, are there any reasons that the weights remain constant after the second epoch? If necessary I can post my code, but at the moment I am hoping for something obvious that I'm overlooking. Thanks all!
EDIT: Here are the links to my code as suggested by sarnold:
MLP.java: http://codetidy.com/1903
Neuron.java: http://codetidy.com/1904
Pattern.java: http://codetidy.com/1905
input.txt: http://codetidy.com/1906
I think I spotted the problem; funnily enough, what I found is visible in your high-level description, but I only noticed it because it looked odd in the code. First, the description:
for each hidden neuron h connected to the output layer
    h.weight connecting h to output = learningRate * outputDelta * h.value
for each input neuron x connected to the hidden layer
    x.weight connecting x to h[i] = learningRate * hiddenDelta[i] * x.value
I believe the h.weight should be updated with respect to the previous weight. Your update mechanism sets it based only on the learning rate, the output delta, and the value of the node. Similarly, the x.weight is also being set based on the learning rate, the hidden delta, and the value of the node:
/*** Weight updates ***/

// update weights connecting hidden neurons to output layer
for (i = 0; i < output.size(); i++) {
    for (Neuron h : output.get(i).left) {
        h.weights[i] = learningRate * outputDelta[i] * h.value;
    }
}

// update weights connecting input neurons to hidden layer
for (i = 0; i < hidden.size(); i++) {
    for (Neuron x : hidden.get(i).left) {
        x.weights[i] = learningRate * hiddenDelta[i] * x.value;
    }
}
I do not know what the correct solution is, but I have two suggestions:
Replace these lines:
h.weights[i] = learningRate * outputDelta[i] * h.value;
x.weights[i] = learningRate * hiddenDelta[i] * x.value;
with these lines:
h.weights[i] += learningRate * outputDelta[i] * h.value;
x.weights[i] += learningRate * hiddenDelta[i] * x.value;
(+= instead of =.)
Replace these lines:
h.weights[i] = learningRate * outputDelta[i] * h.value;
x.weights[i] = learningRate * hiddenDelta[i] * x.value;
with these lines:
h.weights[i] *= learningRate * outputDelta[i];
x.weights[i] *= learningRate * hiddenDelta[i];
(Ignore the value and simply scale the existing weight. The learning rate should be 1.05 instead of .05 for this change.)
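For reference, the first suggestion (accumulating with +=) matches the standard gradient-descent update for a sigmoid network. A minimal NumPy sketch of one training step under that rule; the array shapes and names are mine, not taken from the posted Java code, and biases are omitted for brevity:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_step(x, target, W_hidden, W_output, lr=0.1):
    # forward pass through one hidden layer and one output unit
    h = sigmoid(W_hidden @ x)                 # hidden activations
    y = sigmoid(W_output @ h)                 # output activation

    # backpropagated deltas using the sigmoid derivative s*(1-s)
    output_delta = (target - y) * y * (1 - y)
    hidden_delta = (W_output.T @ output_delta) * h * (1 - h)

    # additive updates: new weight = old weight + lr * delta * input
    W_output += lr * np.outer(output_delta, h)
    W_hidden += lr * np.outer(hidden_delta, x)
    return y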
