I have created the following script to generate "fireworks" (term used loosely), as directed by my tutor. It is based on a skeleton script that I MUST use.
There are spinners to control certain parameters, but for the life of me I cannot figure out what is wrong! Please help me!!!
The task was to animate, via script, a sphere that goes up, whose faces then "explode" and change direction downwards over a time scale.
(firework goes up, explodes, and debris falls to earth)
I believe what I have written is correct, yet no animation is actually generated, despite my BEST effort and months of hard work.
I am new to coding and did not sign up for any coding. All I wanted to do was learn how to model, but my tutor holds the key to my pass.
At this point I am ready to "rage quit" and am simply going out of my mind. ANY help would be greatly appreciated.
Here is the script I have written thus far...
-- Empty variables to be called in or adjusted by spinners
numOfSpheres = 0 --number of exploding spheres
sphereRadius = 0
creationBoxSize = 0 -- spawn radius in what box size?
objs = #() -- objs created go to empty array!
maxHeight = 0
startTime = 0
dropping = false --boolean flag to trigger the falling sphere stage
animate on
(
    for i = 1 to numOfSpheres by 1 do
    (
        at time (startTime + 50.000) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 0.0000001 0 )
        at time (startTime + 50.001) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 0.0000001 0 )
        at time (startTime + 100) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 100 0 )
        at time (startTime) ( objs[i].pos = objs[i].pos )

        local height = maxHeight
        for t = 5 to 100 by 5 do
        (
            height = height - 2
            at time (startTime + t) ( objs[i].pos = objs[i].pos + [ 0, 0, height] )
        )
    )
)
rollout FireWorks "FireWorks"
( -- button UI code starts here
    label lbl1 "By Luke Fahy" style_sunkenedge:true width:67 height:17
    spinner count "FireWorks: " type:#integer range:[1,100,10] -- 1 to 100, default value 10; integer = whole number, not a fraction
    spinner size "Size: " type:#float range:[0.1,30,1]
    spinner sPlane "Spawn Plane: " type:#integer range:[1,200,30]
    spinner maxHeightSpinner "Max Height: " type:#float range:[1,100,14]
    spinner startTimeSpinner "Start Time: " type:#float range:[1,100,1]
    button create "Create Fireworks"

    on create pressed do -- event handler code starts here
    (
        numOfSpheres = count.value
        sphereRadius = size.value
        creationBoxSize = sPlane.value
        maxHeight = maxHeightSpinner.value
        startTime = startTimeSpinner.value
        dropping = false

        -- create spheres
        for i = 1 to numOfSpheres by 1 do
        (
            objs[i] = sphere radius:sphereRadius
            local r = random 0 255
            local g = random 0 255
            local b = random 0 255
            objs[i].wirecolor = (color r g b)
            objs[i].pos = [random creationBoxSize -creationBoxSize, random creationBoxSize -creationBoxSize, 0] -- random positive and negative x/y values to create the spawn box
            convertToMesh objs[i]
            meshOp.explodeAllFaces objs[i] 0
        )
    )
) -- button UI code ends here
createDialog FireWorks width:200
Also, any suggestions for my comments and indentation would be appreciated. (The formatting seems to have been thrown out of the window upon cut and paste; my apologies, I'm new to this forum. The file is attached.)
Kind regards,
ShineSmith
:)
Additional Info:
Furthermore, here is the script my tutor gave me to base my work on.
(An object array is required here, or the name of a single object. Others used the $name of a single sphere and instances of it to replicate multiple "fireworks".)
convertToMesh $
meshop.explodeAllFaces $ 0
update $
v = 20
animate on
(
    at time (sliderTime+50.000) ( meshop.bevelFaces $ #{1..$.numFaces} 0.0000001 0 )
    at time (sliderTime+50.001) ( meshop.bevelFaces $ #{1..$.numFaces} 0.0000001 0 )
    at time (sliderTime+100) ( meshop.bevelFaces $ #{1..$.numFaces} 100 0 )
    at time (sliderTime) ( $.pos = $.pos )
    for t = 5 to 100 by 5 do
    (
        v = v - 2
        at time (sliderTime+t) ( $.pos = $.pos + [0,0,v] )
    )
)
You are running the animation code before any spheres are created and before your Rollout is even created. Move the animate on (...) block inside your on create handler, or make it a separate function.
Move the animation code into a function:
fn doanimate =
(
    animate on
    (
        for i = 1 to numOfSpheres by 1 do
        (
            at time (startTime + 50.000) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 0.0000001 0 )
            at time (startTime + 50.001) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 0.0000001 0 )
            at time (startTime + 100) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 100 0 )
            at time (startTime) ( objs[i].pos = objs[i].pos )

            local height = maxHeight
            for t = 5 to 100 by 5 do
            (
                height = height - 2
                at time (startTime + t) ( objs[i].pos = objs[i].pos + [ 0, 0, height] )
            )
        )
    )
)
Call the function at the end of the button handler, right after the spheres have been created:
on create pressed do -- event handler code starts here
(
    numOfSpheres = count.value
    sphereRadius = size.value
    creationBoxSize = sPlane.value
    maxHeight = maxHeightSpinner.value
    startTime = startTimeSpinner.value
    dropping = false

    -- create spheres
    for i = 1 to numOfSpheres by 1 do
    (
        objs[i] = sphere radius:sphereRadius
        local r = random 0 255
        local g = random 0 255
        local b = random 0 255
        objs[i].wirecolor = (color r g b)
        objs[i].pos = [random creationBoxSize -creationBoxSize, random creationBoxSize -creationBoxSize, 0] -- random positive and negative x/y values to create the spawn box
        convertToMesh objs[i]
        meshOp.explodeAllFaces objs[i] 0
    )

    doanimate()
)
The full script would now look like this:
-- Empty variables to be called in or adjusted by spinners
numOfSpheres = 0 -- number of exploding spheres
sphereRadius = 0
creationBoxSize = 0 -- spawn radius in what box size?
objs = #() -- objs created go to empty array!
maxHeight = 0
startTime = 0
dropping = false -- boolean flag to trigger the falling sphere stage

fn doanimate =
(
    animate on
    (
        for i = 1 to numOfSpheres by 1 do
        (
            at time (startTime + 50.000) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 0.0000001 0 )
            at time (startTime + 50.001) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 0.0000001 0 )
            at time (startTime + 100) ( meshop.bevelFaces objs[i] #{1..objs[i].numFaces} 100 0 )
            at time (startTime) ( objs[i].pos = objs[i].pos )

            local height = maxHeight
            for t = 5 to 100 by 5 do
            (
                height = height - 2
                at time (startTime + t) ( objs[i].pos = objs[i].pos + [ 0, 0, height] )
            )
        )
    )
)

rollout FireWorks "FireWorks"
( -- button UI code starts here
    label lbl1 "By Luke Fahy" style_sunkenedge:true width:67 height:17
    spinner count "FireWorks: " type:#integer range:[1,100,10] -- 1 to 100, default value 10; integer = whole number, not a fraction
    spinner size "Size: " type:#float range:[0.1,30,1]
    spinner sPlane "Spawn Plane: " type:#integer range:[1,200,30]
    spinner maxHeightSpinner "Max Height: " type:#float range:[1,100,14]
    spinner startTimeSpinner "Start Time: " type:#float range:[1,100,1]
    button create "Create Fireworks"

    on create pressed do -- event handler code starts here
    (
        numOfSpheres = count.value
        sphereRadius = size.value
        creationBoxSize = sPlane.value
        maxHeight = maxHeightSpinner.value
        startTime = startTimeSpinner.value
        dropping = false

        -- create spheres
        for i = 1 to numOfSpheres by 1 do
        (
            objs[i] = sphere radius:sphereRadius
            local r = random 0 255
            local g = random 0 255
            local b = random 0 255
            objs[i].wirecolor = (color r g b)
            objs[i].pos = [random creationBoxSize -creationBoxSize, random creationBoxSize -creationBoxSize, 0] -- random positive and negative x/y values to create the spawn box
            convertToMesh objs[i]
            meshOp.explodeAllFaces objs[i] 0
        )

        doanimate()
    )
) -- button UI code ends here

createDialog FireWorks width:200
I'm looking for a way/method to fit my response data (an image is shown below). I want to use f(t) = (square(2*pi*f*t)+1) to filter my raw data, but cftool doesn't recognize this kind of function. Please help me, thanks!
The function below might allow fitting the data. It is continuous, but not differentiable everywhere. The steps tend to fall off to the right, while the OP's data does not, so this might require some extra work. Moreover, the steps have to be equidistant, which, however, seems to be the case here.
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import numpy as np


def f( x, a, b ):
    # test function (that would be the one to fit, actually + a shift of edge position)
    return a + b * x**3


def f_step( x, l, func, args=() ):
    # map x onto a local coordinate y in [-1, 1] within its step of width l
    y = ( x - l / 2. ) % l - l / 2.
    y = y / l * 2.
    # index and centre of the step that x falls into
    p = np.floor( ( x - l / 2. ) / l ) + 1
    centre = p * l
    left = centre - l / 2.
    right = centre + l / 2.
    # values of the underlying smooth function at the step edges and centre
    fL = func( left, *args )
    fR = func( right, *args )
    fC = func( centre, *args )
    out = fC + sharp( y, fL - fC, fR - fC, 5 )
    return out


def sharp( x, a, b, p, epsilon=1e-1 ):
    # sharp but continuous transition from the left edge value a to the right edge value b
    out = a * ( 1. / abs( x + 1 + epsilon )**p - ( 2 + epsilon )**( -p ) ) / ( epsilon**( -p ) - ( 2 + epsilon )**( -p ) )
    out += b * ( 1. / abs( x - 1 - epsilon )**p - ( 2 + epsilon )**( -p ) ) / ( epsilon**( -p ) - ( 2 + epsilon )**( -p ) )
    return out


l = 0.57
xList = np.linspace( -1, 1.75, 500 )
yList = [ f_step( x, l, f, args=( 2, -.3 ) ) for x in xList ]

fig1 = plt.figure( 1 )
ax = fig1.add_subplot( 1, 1, 1 )
ax.plot( xList, yList )
ax.plot( xList, f( xList, 2, -.3 ) )
plt.show()
Looks like:
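To actually run the fit in Python rather than cftool, one could hand a small wrapper of f_step to scipy.optimize.curve_fit. This is only a sketch: xData and yData are hypothetical placeholders for the measured response, and the start values are taken from the example above.

from scipy.optimize import curve_fit

# wrapper exposing the step width l and the parameters of the underlying
# function f as free fit parameters
def model( x, l, a, b ):
    return f_step( x, l, f, args=( a, b ) )

# xData, yData: hypothetical placeholders for the measured response
popt, pcov = curve_fit( model, xData, yData, p0=[ 0.57, 2.0, -0.3 ] )
print( popt )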
I have data which consists of a number of chunks. I know that they come from some continuous curve, but were later shifted in the y-direction. Now I want to shift them back to estimate the original curve. Some parts are not shifted, but simply absent. To clarify the situation, dummy code to generate something similar is below (Matlab):
%% generate some dummy data
knots = rand(10,2);
% fix starting and stop points
knots = [[0,rand()];knots;[1,rand()]];
% sort knots
knots=unique(knots,'rows');
% generate dummy curve
dummyX = linspace(0,1,10^4);
dummyY = interp1(knots(:,1),knots(:,2),dummyX,'spline');
figure()
subplot(2,1,1)
plot(dummyX,dummyY)
%% Add offset and wipe some parts
% get borders of chunks
borders = [1;randi([1,numel(dummyX)],20,1);numel(dummyX)];
borders = unique(borders);
borders = [borders(1:end-1)+1,borders(2:end)];
borders(1) = 1;
% add offsets or nans
offset = (rand(size(borders,1),1)-0.5)*5;
offset(randperm(numel(offset),floor(size(borders,1)/3)))=nan;
for iBorder = 1:size(borders,1)
    idx = borders(iBorder,1):borders(iBorder,2);
    dummyY(idx) = dummyY(idx) + offset(iBorder);
    dummyY(idx([1,end])) = nan;
end
subplot(2,1,2)
plot(dummyX,dummyY)
The original curve is on top, the shifted one on the bottom. I tried to shift the chunks pairwise, minimizing the length of a cubic spline through them, but that did not work for me. I understand that it is impossible to recover exactly the same curve (I may lose some peaks).
Could you help me find the best shifts?
I had several ideas for this and played with overall curvature, arc length, etc., as well as mixed combinations. It turned out that a simple chi**2 works best. So it goes as simply as this:
Get some knots that fit every chunk with a given precision by splines.
Join everything.
Reduce the knots to avoid very close knots from touching sets, as those can result in large curvature.
Use a leastsq fit with splines on the joined and reduced set of knots to find the shifts.
In theory one could play with / modify:
spline order
min knot density
max knot density
how adjacent sets are dealt with
adding a knot to a large gap
etc.
(Note: with some random data, splrep produced error messages. As those are mostly not very helpful, I can only say that this code is not 100% robust.)
The code is as follows:
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp1d, splrep, splev
from scipy.optimize import fmin, leastsq
def reduce_knots( inList, dist ):
    outList = []
    addList = []
    for i in inList:
        try:
            if abs( i - addList[ -1 ] ) < dist:
                addList += [ i ]
            else:
                outList += [ addList ]
                addList = [ i ]
        except IndexError:  ### basically the first element
            addList = [ i ]
    outList += [ addList ]
    return [ sum( x ) / len( x ) for x in outList ]
def adaptive_knots( inX, inY, thresh=.005 ):
    ll = len( inX )
    sup = ll - 4
    assert sup > 3
    nN = 3
    test = True
    while test:
        testknots = np.linspace( 1, len( inX ) - 2, nN, dtype=int )
        testknots = [ inX[ x ] for x in testknots ]
        myTCK = splrep( inX, inY, t=testknots )
        newY = splev( inX, myTCK )
        chi2 = np.sum( ( newY - inY )**2 ) / ll
        if chi2 > thresh:
            nN += 1
            if nN > sup:
                test = False
        else:
            test = False
    return testknots
def global_residuals( shiftList, xBlocks, yBlocks, allTheKnots ):
    # every block gets shifted (one shift is redundant due to the global offset);
    # blocks must be ordered and passed as np.arrays
    localYBlocks = [ s + yList for s, yList in zip( shiftList, yBlocks ) ]
    allTheX = np.concatenate( xBlocks )
    allTheY = np.concatenate( localYBlocks )
    tck = splrep( allTheX, allTheY, t=allTheKnots )
    yList = splev( allTheX, tck )
    diff = yList - allTheY
    return diff
#~ np.random.seed( 28561 )
np.random.seed( 5561 )
#~ np.random.seed( 733437 )

### python way for the test data
knots = np.random.rand( 8, 2 )
knots = np.array( sorted( [ [ 0, np.random.rand() ] ] + list( knots ) + [ [ 1, np.random.rand() ] ], key=lambda x: x[ 0 ] ) )
dummyX = np.linspace( 0, 1, 30000 )
f = interp1d( knots[ :, 0 ], knots[ :, 1 ], 'cubic' )
dummyY = np.fromiter( ( f( x ) for x in dummyX ), float )

chunk = np.append( [ 0 ], np.append( np.sort( np.random.randint( 7, high=len( dummyX ) - 10, size=10, dtype=int ) ), len( dummyX ) ) )

xDataDict = dict()
yDataDict = dict()
allX = np.array( [] )
allY = np.array( [] )
allK = np.array( [] )
allS = []
for i, val in enumerate( chunk[ : -1 ] ):
    if np.random.rand() < .75:  ## 25% chance of a chunk not appearing
        xDataDict[ i ] = dummyX[ val:chunk[ i + 1 ] ]
        realShift = 1.5 * ( 1 - 2 * np.random.rand() )
        allS += [ realShift ]
        yDataDict[ i ] = dummyY[ val:chunk[ i + 1 ] ] + realShift
        yDataDict[ i ] = np.fromiter( ( np.random.normal( scale=.05, loc=y ) for y in yDataDict[ i ] ), float )
        allX = np.append( allX, xDataDict[ i ] )
        allY = np.append( allY, yDataDict[ i ] )
### Plotting
fig = plt.figure()
ax = fig.add_subplot( 3, 1, 1 )
ax.plot( knots[ :, 0 ], knots[ :, 1 ], ls='', c='r', marker='o' )
ax.plot( dummyX, dummyY, '--' )

for key in xDataDict.keys():
    ax.plot( xDataDict[ key ], yDataDict[ key ] )
    myKnots = adaptive_knots( xDataDict[ key ], yDataDict[ key ] )
    allK = np.append( allK, myKnots )
    myTCK = splrep( xDataDict[ key ], yDataDict[ key ], t=myKnots )
    ax.plot( xDataDict[ key ], splev( xDataDict[ key ], myTCK ) )

myTCK = splrep( allX, allY, t=allK )
ax.plot( allX, splev( allX, myTCK ) )

for x in allK:
    ax.axvline( x=x, linestyle=':', color='#AAAAAA', linewidth=1 )
### now fitting
myXBlockList = []
myYBlockList = []
for key in sorted( xDataDict.keys() ):
    myXBlockList += [ xDataDict[ key ] ]
    myYBlockList += [ yDataDict[ key ] ]

# start values: align the ends of adjacent blocks
s = [ 0 ]
for i, y in enumerate( myYBlockList[ :-1 ] ):
    ds = myYBlockList[ i + 1 ][ 0 ] - y[ -1 ]
    s += [ -ds ]
startShift = np.cumsum( s )

allK = reduce_knots( allK, .01 )
sol, ierr = leastsq( global_residuals, x0=startShift, args=( myXBlockList, myYBlockList, allK ), maxfev=10000 )
sol = np.array( sol ) - sol[ 0 ]
print( "solution: ", -sol )
print( "real:     ", np.array( allS ) - allS[ 0 ] )
### Plotting solutions
bx = fig.add_subplot( 3, 1, 3, sharex=ax )
for x, y, s in zip( myXBlockList, myYBlockList, sol ):
    bx.plot( x, y + s )

localYBlocks = [ s + yList for s, yList in zip( sol, myYBlockList ) ]
allTheX = np.concatenate( myXBlockList )
allTheY = np.concatenate( localYBlocks )
tck = splrep( allTheX, allTheY, t=allK )
dx = allTheX[ 1 ] - allTheX[ 0 ]
testX = np.arange( allTheX[ 0 ], allTheX[ -1 ], dx )
finalyList = splev( testX, tck )
bx.plot( testX, finalyList, 'k--' )

mean = sum( dummyY ) / len( dummyY ) - sum( finalyList ) / len( finalyList )
bx.plot( dummyX, dummyY - mean, '--' )

for x in allK:
    bx.axvline( x=x, linestyle=':', color='#AAAAAA', linewidth=1 )

cx = fig.add_subplot( 3, 1, 2, sharex=ax )
for x, y, s in zip( myXBlockList, myYBlockList, startShift ):
    cx.plot( x, y + s )

plt.show()
For small gaps this works nicely on the test data.
The upper graph shows the original spline and its knots as red dots; this is what generated the data. It also shows the noisy, shifted chunks, the initial fitting knots as vertical lines, and the corresponding spline fits.
The middle graph shows the chunks shifted by the pre-calculated start values, i.e. with their ends aligned.
The lower graph shows the original spline, the fitted spline, the reduced knot positions, and the chunks shifted according to the fitted solution.
Naturally, the larger the gaps, the more the solution deviates from the original...
...but it is still quite good.
I've read the HSL to RGB algorithm on Wikipedia. I understand it and can convert using it. However, I came upon another algorithm here, and the math is "explained" here.
The algorithm is:
//H, S and L input range = 0 ÷ 1.0
//R, G and B output range = 0 ÷ 255

if ( S == 0 )
{
    R = L * 255
    G = L * 255
    B = L * 255
}
else
{
    if ( L < 0.5 ) var_2 = L * ( 1 + S )
    else           var_2 = ( L + S ) - ( S * L )

    var_1 = 2 * L - var_2

    R = 255 * Hue_2_RGB( var_1, var_2, H + ( 1 / 3 ) )
    G = 255 * Hue_2_RGB( var_1, var_2, H )
    B = 255 * Hue_2_RGB( var_1, var_2, H - ( 1 / 3 ) )
}

Hue_2_RGB( v1, v2, vH )   //Function Hue_2_RGB
{
    if ( vH < 0 ) vH += 1
    if ( vH > 1 ) vH -= 1
    if ( ( 6 * vH ) < 1 ) return ( v1 + ( v2 - v1 ) * 6 * vH )
    if ( ( 2 * vH ) < 1 ) return ( v2 )
    if ( ( 3 * vH ) < 2 ) return ( v1 + ( v2 - v1 ) * ( ( 2 / 3 ) - vH ) * 6 )
    return ( v1 )
}
I've tried following the math but I can't figure it out. How does it work?
The first part, if ( S == 0 ), handles the case where there is no Saturation, which means it's a shade of grey. You take the Luminance, set all three RGB channels to that greyscale level, and you are done.
If this is not the case, then we need to perform the tricky part:
We shall use var_1 and var_2 as temporary values, only to make the code more readable.
So, if Luminance is smaller than 0.5 (50%), then var_2 = Luminance x (1.0 + Saturation).
If Luminance is equal to or larger than 0.5 (50%), then var_2 = Luminance + Saturation – Luminance x Saturation. That's the else part of:
if ( L < 0.5 ) var_2 = L * ( 1 + S )
else var_2 = ( L + S ) - ( S * L )
Then we compute:
var_1 = 2 x Luminance – var_2
which is going to be useful later.
Now we need another temporary value for each color channel, as far as Hue is concerned. For Red, we add 0.333 to it (H + (1/3) in the code), for Green we do nothing, and for Blue we subtract 0.333 from it (H - (1/3)). That temporary value is called vH (value Hue) in Hue_2_RGB().
Now each color channel is treated separately, hence the three function calls. There are four formulas that can be applied to a color channel, and every color channel should "use" only one of them.
Which one? It depends on the value of Hue (vH).
By the way, the value of vH must be normalized: if it's negative we add 1, and if it's greater than 1 we subtract 1, so that vH lies in [0, 1].
If 6 x vH is smaller than 1: Color channel = var_1 + (var_2 – var_1) x 6 x vH
If 2 x vH is smaller than 1: Color channel = var_2
If 3 x vH is smaller than 2: Color channel = var_1 + (var_2 – var_1) x (0.666 – vH) x 6
Otherwise: Color channel = var_1
For R = 255 * Hue_2_RGB( var_1, var_2, H + ( 1 / 3 ) ), the Color Channel would be the Red, named R in the code.
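To make the recipe concrete, here is a small Python sketch of the pseudo-code above (the function names hsl_to_rgb and hue_to_rgb are my own; H, S and L are expected in 0..1 and the result is an (R, G, B) tuple in 0..255):

def hue_to_rgb( v1, v2, vh ):
    # normalise the hue offset into [0, 1]
    if vh < 0:
        vh += 1
    if vh > 1:
        vh -= 1
    # pick one of the four formulas, depending on where on the hue circle we are
    if 6 * vh < 1:
        return v1 + ( v2 - v1 ) * 6 * vh
    if 2 * vh < 1:
        return v2
    if 3 * vh < 2:
        return v1 + ( v2 - v1 ) * ( 2 / 3 - vh ) * 6
    return v1

def hsl_to_rgb( h, s, l ):
    if s == 0:
        # no saturation: a shade of grey
        grey = round( l * 255 )
        return grey, grey, grey
    # the two temporary helpers from the pseudo-code
    var_2 = l * ( 1 + s ) if l < 0.5 else ( l + s ) - ( s * l )
    var_1 = 2 * l - var_2
    r = 255 * hue_to_rgb( var_1, var_2, h + 1 / 3 )
    g = 255 * hue_to_rgb( var_1, var_2, h )
    b = 255 * hue_to_rgb( var_1, var_2, h - 1 / 3 )
    return round( r ), round( g ), round( b )

# quick check: pure red is H = 0, S = 1, L = 0.5
print( hsl_to_rgb( 0.0, 1.0, 0.5 ) )  # (255, 0, 0)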
I am compiling into Python 3(.4.4) and have generated a program that is 250,000 lines long. When I tried running it, Python crashed: Windows (10) reported "python.exe has stopped working". Shorter versions of the "same" output run OK, so I'm wondering whether the problem is that my program is too long and, if so, how I can increase the limit.
Please note that I am not interested in "solutions" where my output is factored into smaller chunks. A monolithic output file is part of the problem specification.
Here is what the program looks like:
import os, sys
from random import randint, seed
from datetime import datetime

DEAD = '_'
ALIVE = '1'
cells = []  # Will be an array of max_row+2 rows each of max_col+2 columns.

# Create initial population of cells
seed( 1.3 )

def repeat_run( max_run ):
    print( '%20s %20s %20s' % ( 'Time', 'Rate', 'Density' ) )
    for run in range( max_run ):
        blank_row = [ DEAD for col in range( 152 ) ]
        for row in range( 152 ):
            cells.append( blank_row.copy() )
        pop = 0
        for row in range( 1, 152-1 ):
            for col in range( 1, 152-1 ):
                if randint( 0, 1 ) == 0:
                    cells[ row ][ col ] = ALIVE
                    pop += 1
        time, rate, density = simulate( cells, pop )
        print( '%20.5f %20.5f %20.5f' % ( time, rate, density ) )
    print()

def num_neighb( row, col ):
    count = 0
    for col_inc in range( -1, 2 ):
        x = col + col_inc
        for row_inc in range( -1, 2 ):
            y = row + row_inc
            if cells[ y ][ x ] == ALIVE:
                count += 1
    return count

def simulate( cells, pop ):
    # Global tally of all cells that ever lived (for calculating average
    # density over the entire run).
    grand_total = pop
    start = datetime.now()
    for gen in range( 10 ):
        pop = 0  # Number of live cells in next generation
        # Initialise next generation of cells
        next_gen = [ [ DEAD for col in range( 152 ) ] for col in range( 152 ) ]
        # Apply birth/death rules
        nn = num_neighb( 1, 1 )
        if cells[ 1 ][ 1 ] == DEAD:
            if nn == 3:
                next_gen[ 1 ][ 1 ] = ALIVE
                pop += 1
        else:
            if nn == 3 or nn == 4:
                next_gen[ 1 ][ 1 ] = ALIVE
                pop += 1
        # 250,000 lines later ...
        nn = num_neighb( 150, 150 )
        if cells[ 150 ][ 150 ] == DEAD:
            if nn == 3:
                next_gen[ 150 ][ 150 ] = ALIVE
                pop += 1
        else:
            if nn == 3 or nn == 4:
                next_gen[ 150 ][ 150 ] = ALIVE
                pop += 1
        grand_total += pop
        # Copy next_gen to cells
        for col in range( 152 ):
            for row in range( 152 ):
                cells[ row ][ col ] = next_gen[ row ][ col ]
    end = datetime.now()
    delta = ( end - start ).total_seconds()
    return delta, 231040 / delta, grand_total / 231040

repeat_run( 10 )
The full program is available here.
Thanks for your thoughts.
The mailing list claims:
So you can have 600KB of source code or 350KB of pyc without any
trouble whatsoever.
At the same time, the number of locals (and thus function arguments) is capped, and the indentation depth is capped too. The limits are 255 and 100 respectively, and you get a clear error if you exceed those.
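As an illustration of that indentation cap, here is a throwaway sketch (assuming CPython's documented limit of 100 levels; it is not part of the generated program) that compiles a deliberately over-nested source string and fails cleanly rather than crashing:

# Throwaway demo: build a source string with ~110 nested blocks and compile it.
depth = 110  # comfortably above the documented limit of 100
lines = [ "    " * i + "if True:" for i in range( depth ) ]
lines.append( "    " * depth + "pass" )
src = "\n".join( lines ) + "\n"

try:
    compile( src, "<generated>", "exec" )
    print( "compiled fine" )
except SyntaxError as err:  # IndentationError is a subclass of SyntaxError
    print( type( err ).__name__, "-", err.msg )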
Given a grid (or table) with x*y cells, where each cell contains a value. Most of these cells have a value of 0, but there may be a "hot spot" somewhere on the grid: a cell with a high value. The neighbours of this cell then also have values > 0, and the farther away from the hot spot, the lower the value in the respective grid cell.
So this hot spot can be seen as the top of a hill, with values decreasing the farther we are from the hill. At a certain distance the values drop to 0 again.
Now I need to determine the cell within the grid that represents the grid's center of gravity. In the simple example above this centroid would simply be the one cell with the highest value. However, it's not always that simple:
the decreasing values of the neighbour cells around the hot spot may not be equally distributed, or one "side of the hill" may fall to 0 sooner than another side;
there may be another hot spot/hill with values > 0 elsewhere within the grid.
I would think this is a fairly typical problem. Unfortunately I am no math expert, so I don't know what to search for (at least I have not found an answer on Google).
Any ideas how I can solve this problem?
Thanks in advance.
You are looking for the "weighted mean" of the cell values. Assuming each cell has a value z(x,y), you can do the following:
zx(x) = sum( z(x, y) ) over all values of y
zy(y) = sum( z(x, y) ) over all values of x
meanX = sum( x * zx(x) ) / sum( zx(x) )
meanY = sum( y * zy(y) ) / sum( zy(y) )
I trust you can convert this into a language of your choice...
Example: if you know Matlab, then the above would be written as follows
zx = sum( Z, 1 );    % sum over the rows: one total per column (a row vector)
zy = sum( Z, 2 ).';  % sum over the columns: one total per row (transposed to a row vector)
[ny, nx] = size(Z);  % find out the dimensions of Z
meanX = sum((1:nx).*zx) / sum(zx);
meanY = sum((1:ny).*zy) / sum(zy);
This would give you the meanX in the range 1 .. nx : if it's right in the middle, the value would be (nx+1)/2. You can obviously scale this to your needs.
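For completeness, a NumPy version of the same weighted mean might look like this (Z is assumed to be a 2-D array; the indices here are 0-based, so add 1 to get the 1..nx convention used above):

import numpy as np

def center_of_gravity( Z ):
    """Weighted mean (centroid) of a 2-D grid of non-negative cell values."""
    Z = np.asarray( Z, dtype=float )
    ny, nx = Z.shape
    zx = Z.sum( axis=0 )   # profile along x: one total per column
    zy = Z.sum( axis=1 )   # profile along y: one total per row
    mean_x = np.sum( np.arange( nx ) * zx ) / zx.sum()
    mean_y = np.sum( np.arange( ny ) * zy ) / zy.sum()
    return mean_x, mean_y  # 0-based coordinates

# example: the 3x3 grid from the Scala run further below
print( center_of_gravity( [ [4, 1, 9], [3, 5, 1], [9, 5, 0] ] ) )  # about (0.838, 1.0)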
EDIT: one more time, in "almost real" code:
// array Z(N, M) contains values on an evenly spaced grid
// assume base-1 arrays; rows are indexed by ii (the y direction), columns by jj (the x direction)
zx = zeros(M);   // X profile: one entry per column
zy = zeros(N);   // Y profile: one entry per row

// create X profile:
for jj = 1 to M
    for ii = 1 to N
        zx(jj) = zx(jj) + Z(ii, jj);
    next ii
next jj

// create Y profile:
for ii = 1 to N
    for jj = 1 to M
        zy(ii) = zy(ii) + Z(ii, jj);
    next jj
next ii

xsum = 0;
zxsum = 0;
for jj = 1 to M
    zxsum += zx(jj);
    xsum += jj * zx(jj);
next jj
xmean = xsum / zxsum;

ysum = 0;
zysum = 0;
for ii = 1 to N
    zysum += zy(ii);
    ysum += ii * zy(ii);
next ii
ymean = ysum / zysum;
This Wikipedia entry may help; the section entitled "A system of particles" is all you need. Just understand that you need to do the calculation once for each dimension, of which you apparently have two.
And here is a complete Scala 2.10 program to generate a grid full of random integers (using dimensions specified on the command line) and find the center of gravity (where rows and columns are numbered starting at 1):
object Ctr extends App {
  val Array( nRows, nCols ) = args map ( _.toInt )
  val grid = Array.fill( nRows, nCols )( util.Random.nextInt(10) )
  grid foreach ( row => println( row mkString "," ) )
  val sum = grid.map(_.sum).sum
  val xCtr = ( ( for ( i <- 0 until nRows; j <- 0 until nCols )
                   yield (j+1) * grid(i)(j) ).sum :Float ) / sum
  val yCtr = ( ( for ( i <- 0 until nRows; j <- 0 until nCols )
                   yield (i+1) * grid(i)(j) ).sum :Float ) / sum
  println( s"Center is ( $xCtr, $yCtr )" )
}
You could def a function to keep the calculations DRYer, but I wanted to keep it as obvious as possible. Anyway, here we run it a couple of times:
$ scala Ctr 3 3
4,1,9
3,5,1
9,5,0
Center is ( 1.8378378, 2.0 )
$ scala Ctr 6 9
5,1,1,0,0,4,5,4,6
9,1,0,7,2,7,5,6,7
1,2,6,6,1,8,2,4,6
1,3,9,8,2,9,3,6,7
0,7,1,7,6,6,2,6,1
3,9,6,4,3,2,5,7,1
Center is ( 5.2956524, 3.626087 )
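As a quick sanity check on the 3x3 run above: the column totals are 16, 11 and 10 and the grand total is 37, so xCtr = (1*16 + 2*11 + 3*10) / 37 = 68/37 ≈ 1.8378; the row totals are 14, 9 and 14, giving yCtr = (1*14 + 2*9 + 3*14) / 37 = 74/37 = 2.0, which matches the printed result.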