I am creating a simple math game on a hexagonal grid in Unity, though the question is not really about Unity.
I borrowed the image from https://catlikecoding.com/unity/tutorials/ to illustrate the problem; it is quite large, so I only link to it.
Background
As in the linked tutorial, I use an array to store the data. Simplified, it looks like this:
[ 0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0,
0, 0, 0, 0 ]
My aim
Check whether one or more cells are surrounded by cells of the other type.
Definition
Surrounded means that, for a cell or a group of connected cells, every neighbor carries a different flag from theirs.
For example,
[ 0, 1, 1, 0,
1, 0, 1, 0,
0, 1, 1, 0,
0, 0, 0, 0,
0, 0, 0, 0 ]
//Should become
[ 0, 1, 1, 0,
1, 1, 1, 0,
0, 1, 1, 0,
0, 0, 0, 0,
0, 0, 0, 0 ]
This case is easy enough that I don't even need an algorithm, since each cell can hold references to its neighbors, like
class grid {
    grid[] neighbor;
    int flag; // 0 or 1
}
So when I need to check whether a cell is surrounded, I just loop over its neighbors.
Problem
However, this method becomes tedious in the following case:
[ 0, 1, 1, 1,
1, 0, 0, 1,
0, 1, 1, 1,
0, 0, 0, 0,
0, 0, 0, 0 ]
Now I also need to check each neighbor's neighbors, like
bool is_surrounded = true;
foreach (grid i in neighbor) {
    if (i.flag == 1) {
        // Good, this neighbor is part of the surrounding wall.
    } else {
        // Check i's neighbors: if every neighbor except this cell is 1,
        // the pair may still be surrounded; otherwise it is not.
    }
}
This works fine for two blank cells, but what if there are three? Plain recursion is not OK, because when a cell is not surrounded, like
[ 0, 1, 1, 1,
1, 0, 0, 1,
0, 1, 0, 1,
0, 0, 0, 0,
0, 0, 0, 0 ]
I would end up looping over the whole map, doing on the order of 8^n checks.
Question
I think there is a cleverer method I haven't realized. I welcome an answer in any language, or even just an idea. I will certainly bounty a working answer with an explanation.
Thanks.
First you have to make a strict definition of what region counts as "surrounded". One possible approach: the cells that have no free way to the outer edge of the map.
To check this, use any simple traversal algorithm, for example DFS (pathfinding algorithms look like overkill here, since they need a goal point).
Concerning recursion: you need to mark seen cells to avoid rechecking them. There are flood-fill algorithms without recursion and with good complexity.
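For example, a minimal non-recursive sketch in C# (my assumptions: a rectangular int[,] map where 0 is blank and 1 is the other flag; on the hex grid only the neighbour offsets change):
using System.Collections.Generic;

static class SurroundCheck
{
    // BFS over the connected 0-region containing (startR, startC).
    // "Surrounded" here uses the edge definition above: the region
    // has no free path to the outer edge of the map.
    public static bool IsSurrounded(int[,] map, int startR, int startC)
    {
        int rows = map.GetLength(0), cols = map.GetLength(1);
        var visited = new bool[rows, cols];   // mark seen cells: each is processed once
        var queue = new Queue<(int r, int c)>();
        queue.Enqueue((startR, startC));
        visited[startR, startC] = true;
        bool touchesEdge = false;
        int[] dr = { -1, 1, 0, 0 }, dc = { 0, 0, -1, 1 };
        while (queue.Count > 0)
        {
            var (r, c) = queue.Dequeue();
            if (r == 0 || c == 0 || r == rows - 1 || c == cols - 1)
                touchesEdge = true;           // found a free way out
            for (int k = 0; k < 4; k++)
            {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && !visited[nr, nc] && map[nr, nc] == 0)
                {
                    visited[nr, nc] = true;
                    queue.Enqueue((nr, nc));
                }
            }
        }
        return !touchesEdge;
    }
}
Because visited cells are never enqueued twice, this is O(width * height) per region, not 8^n.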
You are going about this backwards. Your coding logic looks fine, but reverse it.
For each 1 in existence, check around it for possible other 1s. If you can return to that first 1 via any path of 1s, you have found a closed loop. Find the direction the returning path came from to reach the first 1; that is where the inside of the loop is. Then, if you are not interested in any deeper loops, mark everything inside that loop (both 1s and 0s) as removed from further searching. Complete the search, and only after the search is done, mark everything inside the loops as 1s (if that is what you want). That way, if you have loops beside loops, you will not be restarting over and over again.
For sub-loops: consider every 1 as a potential start of a loop. If you return to any previous 1, find which direction the return path came from and consider that the inside of a loop.
When all this is done and you have found the loops, make your changes. Do not be concerned if a loop has zero inner positions, as zero is a valid size; simply change all the loop interiors as you decide.
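If all you need in the end is the final marking, a cheaper alternative to tracing each loop explicitly is to flood-fill the 0s reachable from the map border: any 0 never reached must be inside some loop of 1s. A sketch under the same int[,] assumption as above:
using System.Collections.Generic;

static class LoopFill
{
    // Flood-fill the blank cells reachable from the border, then flip
    // every unreached blank cell in one pass, after the search is done.
    public static void FillEnclosed(int[,] map)
    {
        int rows = map.GetLength(0), cols = map.GetLength(1);
        var outside = new bool[rows, cols];
        var stack = new Stack<(int r, int c)>();
        for (int r = 0; r < rows; r++)        // seed with blank border cells
            for (int c = 0; c < cols; c++)
                if ((r == 0 || c == 0 || r == rows - 1 || c == cols - 1) && map[r, c] == 0)
                {
                    outside[r, c] = true;
                    stack.Push((r, c));
                }
        int[] dr = { -1, 1, 0, 0 }, dc = { 0, 0, -1, 1 };
        while (stack.Count > 0)
        {
            var (r, c) = stack.Pop();
            for (int k = 0; k < 4; k++)
            {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                    && !outside[nr, nc] && map[nr, nc] == 0)
                {
                    outside[nr, nc] = true;
                    stack.Push((nr, nc));
                }
            }
        }
        for (int r = 0; r < rows; r++)        // everything still unreached is enclosed
            for (int c = 0; c < cols; c++)
                if (map[r, c] == 0 && !outside[r, c])
                    map[r, c] = 1;
    }
}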
It seems a call to renderer.render(scene, camera) is required to get the matrix property filled in.
For instance, have a scene with a cube :
var geo = new THREE.BoxGeometry(10, 10, 10);
var mat = new THREE.MeshBasicMaterial(); // any material will do here
var mesh = new THREE.Mesh(geo, mat);
mesh.position.set(50, 0, -50);
scene.add(mesh);
With a call to renderer.render(scene, camera), the matrix.elements property of this mesh is:
[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 50, 0, -50, 1]
and without:
[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
Comparing the two arrays, it seems only the position components are left unset.
My use case only needs the matrix elements, so to improve performance I would like to skip the render phase, which I presume is the most costly part.
You can use either of the following two methods, as needed, to update an object's matrix transforms:
object.updateMatrix();
object.updateMatrixWorld();
scene.updateMatrixWorld() will update the world matrices of the complete scene graph, and it is called for you automatically when you call renderer.render().
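For example, to read the matrices without rendering (a sketch, reusing the mesh from the question):
mesh.position.set(50, 0, -50);
mesh.updateMatrix(); // composes position, quaternion and scale into mesh.matrix
console.log(mesh.matrix.elements);
// [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 50, 0, -50, 1]
scene.updateMatrixWorld(); // updates matrixWorld for the whole scene graph
console.log(mesh.matrixWorld.elements);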
Check the source code of these methods so you understand what they are doing.
three.js r.81
I am currently working on a game spell system and I want to know if anyone knows a simple way to enlarge a matrix along with its values, almost like stretching an image.
I am using 2D matrices to represent the spell-affected areas, so the matrix below represents the starting spell effect point and its effect area.
Example:
local area = {{0, 0, 1, 0, 0},
{0, 1, 1, 1, 0},
{1, 1, 3, 1, 1},
{0, 1, 1, 1, 0},
{0, 0, 1, 0, 0}}
Where:
3: Origin point (where the spell was cast)
1: Affected area relative to the origin point.
Taking this into consideration, I would like to develop a function to enlarge the matrix.
function matrix.enlarge(mtx, row, col) ... end
Applied to the example area shown above, the result of the function would be something like the following:
local enlarged_matrix = matrix.enlarge(area, 2, 2)
matrix.print(enlarged_matrix)
--output
--local area = {{0, 0, 0, 1, 0, 0, 0},
-- {0, 0, 1, 1, 1, 0, 0},
-- {0, 1, 1, 1, 1, 1, 0},
-- {1, 1, 1, 3, 1, 1, 1},
-- {0, 1, 1, 1, 1, 1, 0},
-- {0, 0, 1, 1, 1, 0, 0},
-- {0, 0, 0, 1, 0, 0, 0}}
Several possibilities:
Brute force: create a new matrix and copy the old one into it:
function matrix.enlarge(area, horiz, vert)
    local vertNow = #area        -- current row count
    local horizNow = #area[1]    -- current column count
    local newVert = vertNow + vert
    local newHoriz = horizNow + horiz
    local offI = math.floor(vert / 2)
    local offJ = math.floor(horiz / 2)
    -- build a table of zeros with the old matrix copied into the centre
    local newMatrix = {}
    for i = 1, newVert do
        local tt = {}
        newMatrix[i] = tt
        for j = 1, newHoriz do
            local oldI = i - offI
            local oldJ = j - offJ
            if oldI >= 1 and oldI <= vertNow and oldJ >= 1 and oldJ <= horizNow then
                tt[j] = area[oldI][oldJ]
            else
                tt[j] = 0
            end
        end
    end
    return newMatrix
end
Use a formula: you have circular symmetry, so you only need the radius; there is no need to store the values:
function distance(i, j)
    return math.sqrt(i*i + j*j)
end

local dissip = 2 -- at a distance of 2, the spell is e^(-1) of the centre strength
function getSpellStrength(dist) -- Gaussian falloff
    return 3 * math.exp(-math.pow(dist/dissip, 2))
end

-- strength at cell offset (i, j) from the origin:
val = getSpellStrength(distance(i, j))
If the actual computation of spell strength is heavy and the spread doesn't change often (say, only when experience increases by a certain delta), then option 1 is better. If the spread changes quickly (say, every frame while the spell takes effect) and the spell strength is as simple as a Gaussian, then option 2 is better. For in-between cases it depends; you'll have to try both. But #2 is simpler, so I would favor it unless you can show that it is a performance bottleneck.
Also, the formula (option 2) is trivial to apply regardless of the shape of the room or area. If an enemy is at i1,j1 and the caster at i2,j2, you immediately know the spell strength at i1,j1 via distance(i1-i2, j1-j2), regardless of the shape of the room. You can also fairly easily combine the effects of multiple spells, such as a resistance spell cast by the enemy (same distance formula).
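For instance, a small sketch of that, using the distance() and getSpellStrength() defined above:
-- strength felt by an enemy at (i1, j1) from a caster at (i2, j2),
-- independent of the shape of the room
local function strengthAt(i1, j1, i2, j2)
    return getSpellStrength(distance(i1 - i2, j1 - j2))
end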
If you really have to use a matrix, and it must work for any shape, then this is probably the best option:
scale the old matrix to a new matrix:
function enlarge(area, deltaX, deltaY)
    local sizeX = #area[1]            -- number of columns
    local sizeY = #area               -- number of rows
    local scaleX = (sizeX + deltaX) / sizeX
    local scaleY = (sizeY + deltaY) / sizeY
    local newArea = {}
    for iY = 1, sizeY + deltaY do
        local newRow = {}
        newArea[iY] = newRow
        local fromY = round(iY / scaleY)   -- nearest source row
        for jX = 1, sizeX + deltaX do
            local fromX = round(jX / scaleX)
            local val
            if fromY < 1 or fromX < 1 or fromY > sizeY or fromX > sizeX then
                val = 0
            else
                val = area[fromY][fromX]
            end
            newRow[jX] = val
        end
    end
    return newArea
end
Here you're basically creating a scaled version of the original (nearest-neighbour interpolation). WARNING: not debugged, so you may have to massage this a bit (there might be ±1 offsets missing in a few places). And round() would be something like
function round(x)
    return math.floor(x + 0.5)
end
But hopefully you can do the rest of the work :)
Right now I am processing a large amount of JSON data coming from a Mixpanel API. With a small dataset it's a breeze, and the code below runs just fine. However, a large dataset takes a rather long time to process, and we're starting to see timeouts because of it.
My Scala optimization skills are rather poor, so I am hoping someone can show me a faster way to process the following with large datasets. Please do explain why, since it will help my own understanding of Scala.
val people = parse[mp.data.Segmentation](o)
val list = people.data.values.map(b =>
b._2.map(p =>
Map(
"id" -> p._1,
"activity" -> p._2.foldLeft(0)(_+_._2)
)
)
)
.flatten
.filter{ behavior => behavior("activity") != 0 }
.groupBy(o => o("id"))
.map{ case (k,v) => Map("id" -> k, "activity" -> v.map( o => o("activity").asInstanceOf[Int]).sum) }
And here is the Segmentation class:
case class Segmentation(
val legend_size: Int,
val data: Data
)
case class Data(
val series: List[String],
val values: Map[String, Map[String, Map[String, Int]]]
)
Thanks for your help!
Edit: sample data as requested
{"legend_size": 4, "data": {"series": ["2013-12-17", "2013-12-18", "2013-12-19", "2013-12-20", "2013-12-21", "2013-12-22", "2013-12-23", "2013-12-24", "2013-12-25", "2013-12-26", "2013-12-27", "2013-12-28", "2013-12-29", "2013-12-30", "2013-12-31", "2014-01-01", "2014-01-02", "2014-01-03", "2014-01-04", "2014-01-05", "2014-01-06"], "values": {"afef4ac12a21d5c4ef679c6507fe65cd": {"id:twitter.com:194436690": {"2013-12-20": 0, "2013-12-29": 0, "2013-12-28": 0, "2013-12-23": 0, "2013-12-22": 0, "2013-12-21": 1, "2013-12-25": 0, "2013-12-27": 0, "2013-12-26": 0, "2013-12-24": 0, "2013-12-31": 0, "2014-01-06": 0, "2014-01-04": 0, "2014-01-05": 0, "2014-01-02": 0, "2014-01-03": 0, "2014-01-01": 0, "2013-12-30": 0, "2013-12-17": 0, "2013-12-18": 0, "2013-12-19": 0}, "id:twitter.com:330103796": {"2013-12-20": 0, "2013-12-29": 0, "2013-12-28": 0, "2013-12-23": 0, "2013-12-22": 0, "2013-12-21": 0, "2013-12-25": 0, "2013-12-27": 0, "2013-12-26": 1, "2013-12-24": 0, "2013-12-31": 0, "2014-01-06": 0, "2014-01-04": 0, "2014-01-05": 0, "2014-01-02": 0, "2014-01-03": 0, "2014-01-01": 0, "2013-12-30": 0, "2013-12-17": 0, "2013-12-18": 0, "2013-12-19": 0}, "id:twitter.com:216664121": {"2013-12-20": 0, "2013-12-29": 0, "2013-12-28": 0, "2013-12-23": 1, "2013-12-22": 0, "2013-12-21": 0, "2013-12-25": 0, "2013-12-27": 0, "2013-12-26": 0, "2013-12-24": 0, "2013-12-31": 0, "2014-01-06": 0, "2014-01-04": 0, "2014-01-05": 0, "2014-01-02": 0, "2014-01-03": 0, "2014-01-01": 0, "2013-12-30": 0, "2013-12-17": 0, "2013-12-18": 0, "2013-12-19": 0}, "id:twitter.com:414117608": {"2013-12-20": 0, "2013-12-29": 0, "2013-12-28": 1, "2013-12-23": 0, "2013-12-22": 0, "2013-12-21": 0, "2013-12-25": 0, "2013-12-27": 0, "2013-12-26": 0, "2013-12-24": 0, "2013-12-31": 0, "2014-01-06": 0, "2014-01-04": 0, "2014-01-05": 0, "2014-01-02": 0, "2014-01-03": 0, "2014-01-01": 0, "2013-12-30": 0, "2013-12-17": 0, "2013-12-18": 0, "2013-12-19": 0}}}}}
To answer Millhouse's question: the intention is to sum up each date to produce a number describing the total volume of "activity" for each ID. The "ID" is formatted as id:twitter.com:923842.
I don't know the full extent of your processing, what pipelines you have going on, what stress your server is under, or what sort of threading profile you've set up to receive the information. However, assuming that you've correctly separated I/O from CPU-bound tasks, and that what you've shown us is strictly CPU-bound, try simply adding .par to the very first Map:
people.data.values.par.map(b =>
as a first pass to see if you can get some performance gains. I don't see any specific ordering required of the processing which tells me it's ripe for parallelization.
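Applied to the pipeline from the question, that would look something like this (a sketch; the .seq at the end converts back to a sequential collection for downstream code):
val list = people.data.values.par
  .map(b => b._2.map(p =>
    Map("id" -> p._1, "activity" -> p._2.foldLeft(0)(_ + _._2))))
  .flatten
  .filter(behavior => behavior("activity") != 0)
  .groupBy(o => o("id"))
  .map { case (k, v) =>
    Map("id" -> k, "activity" -> v.map(o => o("activity").asInstanceOf[Int]).sum)
  }
  .seq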
Edit
After playing around with parallelization, I would add that modifying the TaskSupport is helpful for this case. You can modify a parallelized collection's tasksupport as such:
import scala.collection.parallel._
val pc = mutable.ParArray(1, 2, 3)
pc.tasksupport = new ForkJoinTaskSupport(
new scala.concurrent.forkjoin.ForkJoinPool(2))
See http://www.scala-lang.org/api/2.10.3/index.html#scala.collection.parallel.TaskSupport
I have some suggestions that might help.
I would try to move the filter command as early in the program as possible. Since your data contains many dates with zero activity, you would see improvements by doing this. The best solution might be to test for this while parsing the JSON data; if that is not possible, make it the first statement, as sketched below.
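For example, one way to drop the zeros up front (a sketch against the Segmentation class from the question; the names are mine):
// discard zero-activity dates before any grouping or summing
val nonZero = people.data.values.toList
  .flatMap(_._2.toList)                       // (id, date -> count) pairs
  .map { case (id, byDate) => (id, byDate.filter(_._2 != 0)) }
  .filter(_._2.nonEmpty)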
The way I understand it, you would like to end up with a way to look up an aggregate of the sums for a given id. I would suggest you represent this with a map from the id to the aggregate. Also, the Scala List class has a sum function.
I came up with this code:
val originalList_IdToAggregate = people.data.values.toList
  .flatMap(_._2.toList)                  // (id, date -> count) pairs
  .map(p => (p._1, p._2.values.sum))     // id -> total activity
It might not match your project directly, but I think it is almost what you need.
If you need to make a map of this you just append toMap to the end.
If this doesn't give you enough speed, you could create your own parser that aggregates and filters while parsing only this kind of JSON. Writing parsers is quite easy in Scala if you are using the parser combinators. Just keep in mind to throw away what you don't need as early as possible, and not to build too many deep branches; then this should be a fast solution with a low memory footprint.
As for going parallel, this can be a good idea. I don't know enough about your application to tell you the best way, but it might be possible to hide the computational cost of processing the data under the cost of transporting it. Try to balance parsing and I/O over multiple threads and see if you can achieve this.
I am exporting data from Mathematica to a file with a "dat" extension in this manner:
numbercount=0;
exporttable =
TableForm[
Flatten[
Table[
Table[
Table[{++numbercount, xcord, ycord, zcord}, {xcord, 0, 100, 5}],
{ycord, 0, 100, 5}],
{zcord,10, 100, 10}],
2]];
Export["mydata.dat", exporttable]
What happens is that in the "mydata.dat" file the output appears like this:
1 0 0 10
2 5 0 10
3 10 0 10 and so on
But I want the data to appear like this in the "mydata.dat" file.
1, 0, 0, 10
2, 5, 0, 10
3, 10, 0, 10 and so on
If you observe, I want a comma after the first, second, and third number in each line, but not after the fourth.
I have tried the code below. It inserts the commas between the numbers, but it takes a long time to run, as I have huge amounts of data to export. I also feel that someone can perhaps come up with a better solution.
numbercount=0;
exporttable =Flatten[
Table[
Table[
Table[{++numbercount, xcord, ycord, zcord}, {xcord, 0, 100, 5}],
{ycord, 0, 100, 5}],
{zcord,10, 100, 10}],
2];
x = TableForm[
   Table[Insert[exporttable[[i]], ",", {{2}, {3}, {4}}],
    {i, 1, Length[exporttable]}]];
Export["mydata.dat", x]
Have you tried exporting it as a CSV file? The third parameter of Export is file type, so you'd type
Export["mydata.dat", x, "CSV"]
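Note that with "CSV" the commas come from the format itself, so the Insert step becomes unnecessary; you can export the flat list from your second code block directly (a sketch, using your exporttable before any TableForm wrapping):
(* rows come out as comma-separated lines: 1, 0, 0, 10 etc. *)
Export["mydata.dat", exporttable, "CSV"]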
To add to this, here is a categorical list and an alphabetical list of the available formats in Mathematica.
As a side note, you can build your list with a single Table command and without explicit index variables:
exporttable1 = MapIndexed[Join[#2, #1] &,
Flatten[Table[{xcord, ycord, zcord},
{zcord, 10, 100, 10},
{ycord, 0, 100, 5},
{xcord, 0, 100, 5}], 2]]
exporttable1 == exporttable
(*
-> True
*)