Date and Time Objects in Mathematica

I want to use the DateObject in Mathematica to calculate the difference in times between two cities.
How do I convert the DateObject and TimeObject output to numerical values so I can manipulate them and plot them?

You can obtain numerical values using DateList, e.g.
d = Today
t = TimeObject[Now]
o = DateObject[d, t]
{dt, tm} = TakeDrop[DateList[o], 3]
DateString[o, {"DayName", ", ", "DayShort", " ", "MonthName"}]
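If you need plain numbers for plotting, AbsoluteTime converts any DateObject into seconds since the beginning of 1900; a minimal sketch with made-up data:
dates = Table[DatePlus[Today, i], {i, 0, 9}];
secs = AbsoluteTime /@ dates;            (* seconds since January 1, 1900 *)
ListLinePlot[(secs - First[secs])/3600.] (* elapsed hours, as plain reals *)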

By putting your code into comments you've made it very difficult to be certain what you have done. For example, it's no surprise that the expression
GeoPosition[Toronto]
is unevaluated, and that the enclosing expressions do not do what you want them to do. Nevertheless, guessing at what you might be trying to do ...
If I execute the following:
sList = Table[Sunrise[Entity["City", {"Toronto", "Ontario", "Canada"}],
DateObject[{2022, 1, 9 + i, 0, 0, 0}]], {i, 0, 9}]
my Mathematica (v12.something on Mac) returns a list of 10 DateObjects, starting with
DateObject[{2022, 1, 9, 12, 50}, "Minute", "Gregorian", 0.]
And if I execute
AbsoluteTime /@ sList
MMA returns a list of 10 absolute times.
Now, set
t1 = DateObject[{2022, 1, 9, 12, 50}, "Minute", "Gregorian", 0.]
and, using another city ...
t2 = Sunrise[Entity["City", {"Liverpool", "Liverpool", "UnitedKingdom"}],
DateObject[{2022, 1, 9, 0, 0, 0}]]
then
DateDifference[t1, t2]
returns
Quantity[-0.18472222222222223, "Days"]
You'll notice that I have not bothered with the time zone for the DateObject (irrelevant for calculating differences between cities), and I have not wrapped Toronto in GeoPosition; MMA is smart enough not to need that assistance.
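If you'd rather have that difference as plain hours than as a Quantity of days, UnitConvert or QuantityMagnitude will get you there; a small sketch using the values above:
diff = DateDifference[t1, t2]  (* Quantity[-0.184722, "Days"] *)
UnitConvert[diff, "Hours"]     (* Quantity[-4.43333, "Hours"] *)
QuantityMagnitude[diff]*24     (* -4.43333, a plain machine real *)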

The objective of the file is to calculate the number of hours of daylight in Toronto and Edmonton. First, create a constant called "daysElapsed":
daysElapsed = 365;
Next, create a table of sunrises and sunsets. These are returned as DateObjects/Quantities.
sunriseToList =
Table[Sunrise[Entity["City", {"Toronto", "Ontario", "Canada"}],
DateObject[{2022, 1, 1 + i, 0, 0, 0}]], {i, 0, daysElapsed}];
sunsetToList =
Table[Sunset[Entity["City", {"Toronto", "Ontario", "Canada"}],
DateObject[{2022, 1, 1 + i, 0, 0, 0}]], {i, 0, daysElapsed}];
Using the function AbsoluteTime, convert the two lists of DateObjects into times in seconds (AbsoluteTime counts seconds from the beginning of 1900). This allows you to manipulate the data easily.
sunrisenumTo = AbsoluteTime /@ sunriseToList;
sunsetnumTo = AbsoluteTime /@ sunsetToList;
Subtracting the time of sunrise from the time of sunset gives the total time of daylight.
hoursoflightTo = N[(sunsetnumTo - sunrisenumTo)/60/60];
Repeat the above process for the next city: Edmonton
sunriseEdList =
Table[Sunrise[Entity["City", {"Edmonton", "Alberta", "Canada"}],
DateObject[{2022, 1, 1 + i, 0, 0, 0}]], {i, 0, daysElapsed}];
sunsetEdList =
Table[Sunset[Entity["City", {"Edmonton", "Alberta", "Canada"}],
DateObject[{2022, 1, 1 + i, 0, 0, 0}]], {i, 0, daysElapsed}];
sunrisenumEd = AbsoluteTime /@ sunriseEdList;
sunsetnumEd = AbsoluteTime /@ sunsetEdList;
hoursoflightEd = N[(sunsetnumEd - sunrisenumEd)/60/60];
t = hoursoflightTo - hoursoflightEd;
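As an aside (not part of the original post), the Toronto/Edmonton repetition could be folded into a single helper; a hedged sketch, where daylightHours is a made-up name:
daylightHours[city_Entity, days_Integer] := Module[{rise, set},
  rise = Table[Sunrise[city, DateObject[{2022, 1, 1 + i, 0, 0, 0}]], {i, 0, days}];
  set = Table[Sunset[city, DateObject[{2022, 1, 1 + i, 0, 0, 0}]], {i, 0, days}];
  (* sunset minus sunrise, converted from seconds to hours *)
  N[(AbsoluteTime /@ set - AbsoluteTime /@ rise)/3600]]
hoursoflightTo = daylightHours[Entity["City", {"Toronto", "Ontario", "Canada"}], daysElapsed];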
The first plot below shows the difference in hours of sunlight. As Edmonton is farther north, it gets less light in the winter and much more in the summer.
plotT = ListLinePlot[hoursoflightTo - hoursoflightEd]
(Plot: difference in hours of light between the two cities)
This plots the hours of light for each city over 365 days.
ListPlot[{hoursoflightEd, hoursoflightTo},
ColorFunctionScaling -> True]
(Plot: hours of light for each city)
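A possible refinement (not in the original): DateListPlot can put actual dates on the x axis instead of day indices, e.g.
DateListPlot[Transpose[{sunriseToList, hoursoflightTo}],
 PlotLabel -> "Hours of daylight in Toronto, 2022"]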

Related

Lua Proportional Matrix Enlargement with Stretched Values

I am currently working on a game spell system and I want to know if anyone knows a simple way to enlarge a matrix and also its values, almost like an image stretch.
I am using 2D matrices to represent the spell-affected areas, so the matrix below represents the starting spell effect point and its effect area.
Example:
local area = {{0, 0, 1, 0, 0},
{0, 1, 1, 1, 0},
{1, 1, 3, 1, 1},
{0, 1, 1, 1, 0},
{0, 0, 1, 0, 0}}
Where:
3: Origin point (where the spell was cast)
1: Affected area relative to the origin point.
Taking this into consideration, I would like to develop a function to enlarge the matrix.
function matrix.enlarge(mtx, row, col) ... end
The abstraction and result of the function, taking the shown example of an area, would be something like the following:
local enlarged_matrix = matrix.enlarge(area, 2, 2)
matrix.print(enlarged_matrix)
--output
--local area = {{0, 0, 0, 1, 0, 0, 0},
-- {0, 0, 1, 1, 1, 0, 0},
-- {0, 1, 1, 1, 1, 1, 0},
-- {1, 1, 1, 3, 1, 1, 1},
-- {0, 1, 1, 1, 1, 1, 0},
-- {0, 0, 1, 1, 1, 0, 0},
-- {0, 0, 0, 1, 0, 0, 0}}
Several possibilities:
Brute force: create a new matrix and copy the old one into it:
function matrix.enlarge(area, horiz, vert)
  local vertNow = #area        -- current number of rows
  local horizNow = #area[1]    -- current number of columns
  local newVert = vertNow + vert
  local newHoriz = horizNow + horiz
  -- create a table of zeros, with the old matrix copied into the center
  local newMatrix = {}
  for i = 1, newVert do
    local tt = {}
    newMatrix[i] = tt
    for j = 1, newHoriz do
      if i > vert/2 and i <= vertNow + vert/2
          and j > horiz/2 and j <= horizNow + horiz/2 then
        tt[j] = area[i - vert/2][j - horiz/2]
      else
        tt[j] = 0
      end
    end
  end
  return newMatrix
end
Use a formula: you have circular symmetry, so you only need the radius; there is no need to store the values:
function distance(i,j)
return math.sqrt(i*i+j*j)
end
local dissip = 2 -- at a distance of "2", spell is e^(-0.5) of center
function getSpellStrength(dist) -- gaussian
return 3*math.exp(-0.5*math.pow(dist/dissip, 2))
end
val = getSpellStrength(distance(i, j)) -- i, j relative to the caster
If the actual computation of spell strength is heavy, and the spread doesn't change often (say, only when experience increases by a certain delta), then option 1 is better. If the spread changes quickly (say, every frame while the spell takes effect), and the spell strength is as simple as a Gaussian, then option 2 is better. For in-between cases you'll have to try both. But #2 is simpler, so I would favor it unless you can show that it is a performance bottleneck.
Also, the formula (option 2) is trivial to apply regardless of the shape of the room/area. If an enemy is at i1,j1 and the caster at i2,j2, you immediately know the spell strength at i1,j1 via distance(i1-i2, j1-j2), regardless of the shape of the room. You can also fairly easily combine the effects of multiple spells, like a resistance spell by the enemy (same distance formula).
If you really have to use matrix, and it must work for any shape, then probably this is best option:
scale the old matrix to a new matrix:
function enlarge(area, deltaX, deltaY)
  local sizeX = #area[1]   -- number of columns
  local sizeY = #area      -- number of rows
  local scaleX = (sizeX + deltaX)/sizeX
  local scaleY = (sizeY + deltaY)/sizeY
  local newArea = {}
  for iY = 1, sizeY + deltaY do
    local newRow = {}
    newArea[iY] = newRow
    local fromY = round(iY/scaleY)
    for jX = 1, sizeX + deltaX do
      local fromX = round(jX/scaleX)
      local val
      if fromY < 1 or fromX < 1 or fromY > sizeY or fromX > sizeX then
        val = 0
      else
        val = area[fromY][fromX]
      end
      newRow[jX] = val
    end
  end
  return newArea
end
Here, you're basically creating a scaled version of the original (interpolation). WARNING: not debugged, so you may have to massage this a bit (e.g. there might be a ±1 missing in a few places). And round() would be something like
function round(x)
  return math.floor(x + 0.5)
end
But hopefully you can do the rest of the work :)

Exporting Data from Mathematica with commas

I am exporting data from Mathematica in this manner to a file with a "dat" extension.
numbercount=0;
exporttable =
TableForm[
Flatten[
Table[
Table[
Table[{++numbercount, xcord, ycord, zcord}, {xcord, 0, 100, 5}],
{ycord, 0, 100, 5}],
{zcord,10, 100, 10}],
2]];
Export["mydata.dat", exporttable]
Now what happens is that in the "mydata.dat" file the output appears like this:
1 0 0 10
2 5 0 10
3 10 0 10 and so on
But I want the data to appear like this in the "mydata.dat" file:
1, 0, 0, 10
2, 5, 0, 10
3, 10, 0, 10 and so on
If you observe, I want a comma after every first, second, and third number, but not after the fourth number in each line.
I have tried the code below; it inserts the commas between the numbers, but it takes a long time to run as I have huge amounts of data to export. I also feel that someone can perhaps come up with a better solution.
numbercount=0;
exporttable =Flatten[
Table[
Table[
Table[{++numbercount, xcord, ycord, zcord}, {xcord, 0, 100, 5}],
{ycord, 0, 100, 5}],
{zcord,10, 100, 10}],
2];
x = TableForm[Table[Insert[
exporttable[[i]], ",", {{2}, {3}, {4}}], {i, 1, Length[exporttable]}]];
Export["mydata.dat", x]
Have you tried exporting it as a CSV file? The third parameter of Export is the file type, so you'd type
Export["mydata.dat", x, "CSV"]
To add to this, here is a categorical list and an alphabetical list of the available formats in Mathematica.
As an aside, note that you can build your list with only one Table command, and without explicit index variables:
exporttable1 = MapIndexed[Join[#2, #1] &,
Flatten[Table[{xcord, ycord, zcord},
{zcord, 10, 100, 10},
{ycord, 0, 100, 5},
{xcord, 0, 100, 5}], 2]]
exporttable1 == exporttable
(*
-> True
*)

Optimizing calculation with lists of matrices within a Picard Iteration

Currently I am working with some Mathematica code to do a Picard Iteration. The code itself works fine but I am trying to make it more efficient. I have had some success but am looking for suggestions. It may not be possible to speed it up anymore but I have run out of ideas and am hoping people with more experience with programming/Mathematica than me might be able to make some suggestions. I am only posting the Iteration itself but can supply additional information as needed.
The code below was edited to be fully executable, as requested.
Also, I changed it from a While to a Do loop to make testing easier, as convergence is not required.
Clear["Global`*"]
ngrid = 2048;
delr = 4/100;
delk = \[Pi]/delr/ngrid;
rvalues = Table[(i - 1/2) delr, {i, 1, ngrid}];
kvalues = Table[(i - 1/2) delk, {i, 1, ngrid}];
wa[x_] := (19 + .5 x) Exp[-.7 x] + 1
wb[x_] := (19 + .1 x) Exp[-.2 x] + 1
wd = SetPrecision[
Table[{{wa[(i - 1/2) delk], 0}, {0, wb[(i - 1/2) delk]}}, {i, 1,
ngrid}], 26];
sigmaAA = 1;
hcloseAA = {};
i = 1;
While[(i - 1/2)*delr < sigmaAA, hcloseAA = Append[hcloseAA, -1]; i++]
hcloselenAA = Length[hcloseAA];
hcloseAB = hcloseAA;
hcloselenAB = hcloselenAA;
hcloseBB = hcloseAA;
hcloselenBB = hcloselenAA;
ccloseAA = {};
i = ngrid;
While[(i - 1/2)*delr >= sigmaAA, ccloseAA = Append[ccloseAA, 0]; i--]
ccloselenAA = Length[ccloseAA];
ccloseAB = ccloseAA;
ccloselenAB = ccloselenAA;
ccloseBB = ccloseAA;
ccloselenBB = ccloselenAA;
na = 20;
nb = 20;
pa = 27/(1000 \[Pi]);
pb = 27/(1000 \[Pi]);
p = {{na pa, 0}, {0, nb pb}};
id = {{1, 0}, {0, 1}};
AFD = 1;
AFDList = {};
timelist = {};
gammainitial = Table[{{0, 0}, {0, 0}}, {ngrid}];
gammafirst = gammainitial;
step = 1;
tol = 10^-7;
old = 95/100;
new = 1 - old;
Do[
t = AbsoluteTime[];
extractgAA = Table[Extract[gammafirst, {i, 1, 1}], {i, hcloselenAA}];
extractgBB = Table[Extract[gammafirst, {i, 2, 2}], {i, hcloselenBB}];
extractgAB = Table[Extract[gammafirst, {i, 1, 2}], {i, hcloselenAB}];
csolutionAA = (Join[hcloseAA - extractgAA, ccloseAA]) rvalues;
csolutionBB = (Join[hcloseBB - extractgBB, ccloseBB]) rvalues;
csolutionAB = (Join[hcloseAB - extractgAB, ccloseAB]) rvalues;
chatAA = FourierDST[SetPrecision[csolutionAA, 32], 4];
chatBB = FourierDST[SetPrecision[csolutionBB, 32], 4];
chatAB = FourierDST[SetPrecision[csolutionAB, 32], 4];
chatmatrix =
2 \[Pi] delr Sqrt[2*ngrid]*
Transpose[{Transpose[{chatAA, chatAB}],
Transpose[{chatAB, chatBB}]}]/kvalues;
gammahat =
Table[(wd[[i]].chatmatrix[[i]].(Inverse[
id - p.wd[[i]].chatmatrix[[i]]]).wd[[i]] -
chatmatrix[[i]]) kvalues[[i]], {i, ngrid}];
gammaAA =
FourierDST[SetPrecision[Table[gammahat[[i, 1, 1]], {i, ngrid}], 32],
4];
gammaBB =
FourierDST[SetPrecision[Table[gammahat[[i, 2, 2]], {i, ngrid}], 32],
4];
gammaAB =
FourierDST[SetPrecision[Table[gammahat[[i, 1, 2]], {i, ngrid}], 32],
4];
gammasecond =
Transpose[{Transpose[{gammaAA, gammaAB}],
Transpose[{gammaAB, gammaBB}]}]/(rvalues 2 \[Pi] delr Sqrt[
2*ngrid]);
AFD = Sqrt[
1/ngrid Sum[((gammafirst[[i, 1, 1]] -
gammasecond[[i, 1, 1]])/(gammafirst[[i, 1, 1]] +
gammasecond[[i, 1, 1]]))^2 + ((gammafirst[[i, 2, 2]] -
gammasecond[[i, 2, 2]])/(gammafirst[[i, 2, 2]] +
gammasecond[[i, 2, 2]]))^2 + ((gammafirst[[i, 1, 2]] -
gammasecond[[i, 1, 2]])/(gammafirst[[i, 1, 2]] +
gammasecond[[i, 1, 2]]))^2 + ((gammafirst[[i, 2, 1]] -
gammasecond[[i, 2, 1]])/(gammafirst[[i, 2, 1]] +
gammasecond[[i, 2, 1]]))^2, {i, 1, ngrid}]];
gammafirst = old gammafirst + new gammasecond;
time2 = AbsoluteTime[] - t;
timelist = Append[timelist, time2], {1}]
Print["Mean time per calculation = ", Mean[timelist]]
Print["STD time per calculation = ", StandardDeviation[timelist]]
Just some notes on things:
ngrid, delr, delk, rvalues, kvalues are just the values used to make the problem discrete. Typically they are
ngrid = 2048;
delr = 4/100;
delk = \[Pi]/delr/ngrid;
rvalues = Table[(i - 1/2) delr, {i, 1, ngrid}];
kvalues = Table[(i - 1/2) delk, {i, 1, ngrid}];
All matrices being used are 2 x 2 with identical off-diagonals
The identity matrix and the P matrix (it is actually the density) are
p = {{na pa, 0}, {0, nb pb}};
id = {{1, 0}, {0, 1}};
The major slow spots in the calculation I have identified are the FourierDST calls (the forward and back transforms account for close to 40% of the calculation time); the gammahat calculation accounts for another 40%, with the remaining time dominated by the AFD calculation.
On my i7 processor the average calculation time per cycle is 1.52 seconds. My hope is to get it under a second, but that may not be possible.
My hope had been to introduce some parallel computation. This was tried with both ParallelTable and with ParallelSubmit/WaitAll. However, I found that any speedup from the parallel calculation was offset by the communication time from the master kernel to the other kernels (at least that is my assumption, since calculations on new data take twice as long as just recalculating the existing data; I take this to mean that the slowdown is in disseminating the new lists). I played around with DistributeDefinitions as well as SetSharedVariable, but was unable to get them to do anything.
One thing I am wondering is whether using Table for my discrete calculations is the best way to do this.
I had also thought I could possibly rewrite this in such a manner as to be able to compile it, but my understanding is that Compile only works with machine precision, whereas I need higher precision to get convergence.
Thank you in advance for any suggestions.
I will wait for the code acl suggests, but off the top of my head, I suspect that this construct:
Table[Extract[gammafirst, {i, 1, 1}], {i, hcloselenAA}]
may be written, and will execute faster, as:
gammafirst[[;; hcloselenAA, 1, 1]]
But I am forced to guess the shape of your data.
In the several lines using:
FourierDST[SetPrecision[Table[gammahat[[i, 1, 1]], {i, ngrid}], 32], 4];
you could remove the Table:
FourierDST[SetPrecision[gammahat[[All, 1, 1]], 32], 4];
And, if you really, really need this SetPrecision, couldn't you do it all at once in the calculation of gammahat?
As far as I can see, all numbers used in the calculation of gammahat are exact. This may be on purpose, but it is slow. You might consider using approximate numbers instead.
EDIT
With the complete code in your latest edit, just adding //N to your 2nd and 3rd lines cuts the timing at least in half without reducing numerical accuracy much. If I compare all the numbers in res = {gammafirst, gammasecond, AFD}, the difference between the original and the version with //N added is res1 - res2 // Flatten // Total ==> 1.88267*10^-13.
Removing all the SetPrecision stuff speeds up the code by a factor of 7, and the results seem to be of similar accuracy.
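For concreteness, a sketch of what that edit presumably looks like (assuming the "2nd and 3rd lines" are the definitions of delr and delk):
delr = 4/100 // N;
delk = \[Pi]/delr/ngrid // N; (* everything downstream now uses machine reals *)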

Definition lookup speed: a performance issue

I have the following problem.
I need to build a very large number of definitions(*) such as
f[{1,0,0,0}] = 1
f[{0,1,0,0}] = 2
f[{0,0,1,0}] = 3
f[{0,0,0,1}] = 2
...
f[{2,3,1,2}] = 4
...
f[{n1,n2,n3,n4}] = some integer
...
This is just an example. The length of the argument list does not need to be 4 but can be anything.
I realized that the lookup for each value slows down with exponential complexity when the length of the argument list increases. Perhaps this is not so strange, since in principle there is clearly a combinatorial explosion in how many definitions Mathematica needs to store.
Still, I had expected Mathematica to be smart about this and value extraction to be constant-time. Apparently it is not.
Is there any way to speed up the lookup time? This probably has to do with how Mathematica internally handles symbol definition lookups. Does it parse the list until it finds a match? It seems that it does.
All suggestions highly appreciated.
With best regards
Zoran
(*) I am working on a stochastic simulation software that generates all configurations of a system and needs to store how many times each configuration occurred. In that sense a list {n1, n2, ..., nT} describes a particular configuration of the system saying that there are n1 particles of type 1, n2 particles of type 2, ..., nT particles of type T. There can be exponentially many such configurations.
Could you give some detail on how you worked out that lookup time is exponential?
If it is indeed exponential, perhaps you could speed things up by using Hash on your keys (configurations), then storing key-value pairs in a list like {{key1,value1},{key2,value2}}, kept sorted by key, and then using binary search (which should be logarithmic time). This should be very quick to code up, but it is not optimal in terms of speed; a sketch follows.
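A minimal, untested sketch of that idea (it ignores hash collisions, which a real implementation would have to handle, and uses a linear-time insert for brevity):
binarySearch[list_, x_] := Module[{lo = 1, hi = Length[list], mid},
  While[lo <= hi,
   mid = Quotient[lo + hi, 2];
   Which[list[[mid]] == x, Return[mid],
    list[[mid]] < x, lo = mid + 1,
    True, hi = mid - 1]];
  0]
keys = {}; vals = {}; (* parallel lists, kept sorted by hashed key *)
store[conf_, v_] := Module[{h = Hash[conf], pos},
  pos = LengthWhile[keys, # < h &] + 1;
  keys = Insert[keys, h, pos]; vals = Insert[vals, v, pos]]
lookup[conf_] := With[{pos = binarySearch[keys, Hash[conf]]},
  If[pos == 0, Missing["NotFound"], vals[[pos]]]]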
If that's not fast enough, one could think about setting up a proper hashtable implementation (which I thought was what the f[{0,1,0,1}]=3 approach did, without having checked).
But some toy example of the slowdown would be useful to proceed further...
EDIT: I just tried
test[length_] := Block[{f},
Do[
f[RandomInteger[{0, 10}, 100]] = RandomInteger[{0, 10}];,
{i, 1, length}
];
f[{0, 0, 0, 0, 1, 7, 0, 3, 7, 8, 0, 4, 5, 8, 0, 8, 6, 7, 7, 0, 1, 6,
3, 9, 6, 9, 2, 7, 2, 8, 1, 1, 8, 4, 0, 5, 2, 9, 9, 10, 6, 3, 6,
8, 10, 0, 7, 1, 2, 8, 4, 4, 9, 5, 1, 10, 4, 1, 1, 3, 0, 3, 6, 5,
4, 0, 9, 5, 4, 6, 9, 6, 10, 6, 2, 4, 9, 2, 9, 8, 10, 0, 8, 4, 9,
5, 5, 9, 7, 2, 7, 4, 0, 2, 0, 10, 2, 4, 10, 1}] // timeIt
]
with timeIt defined to accurately time even short runs as follows:
timeIt::usage = "timeIt[expr] gives the time taken to execute expr, repeating as many times as necessary to achieve a total time of 1s";
SetAttributes[timeIt, HoldAll]
timeIt[expr_] := Module[{t = Timing[expr;][[1]], tries = 1},
While[t < 1.,
tries *= 2;
t = Timing[Do[expr, {tries}];][[1]];
];
Return[t/tries]]
and then
out = {#, test[#]} & /@ {10, 100, 1000, 10000, 100000, 100000};
ListLogLogPlot@out
(also for larger runs). So it seems constant time here.
Suppose you enter your information not like
f[{1,0,0,0}] = 1
f[{0,1,0,0}] = 2
but into an n1 x n2 x n3 x n4 array m like
m[[2,1,1,1]] = 1
m[[1,2,1,1]] = 2
etc.
(you could even enter values not as f[{1,0,0,0}]=1, but as f[{1,0,0,0},1] with
f[li_List, i_Integer] := Part[m, Apply[Sequence, li + 1]] = i;
f[li_List] := Part[m, Apply[Sequence, li + 1]];
where you have to initialize m e.g. by m = ConstantArray[0, {4, 4, 4, 4}];)
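For instance, with a hypothetical 4 x 4 x 4 x 4 initialization, storing and retrieving would look like:
m = ConstantArray[0, {4, 4, 4, 4}];
f[{1, 0, 0, 0}, 1]; (* stores 1 at m[[2, 1, 1, 1]] *)
f[{1, 0, 0, 0}]     (* -> 1 *)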
Let's compare timings:
testf[z_] :=
(
Do[ f[{n1, n2, n3, n4}] = RandomInteger[{1,100}], {n1,z}, {n2,z}, {n3,z},{n4,z}];
First[ Timing[ Do[ f[{n2, n4, n1, n3}], {n1, z}, {n2, z}, {n3, z}, {n4, z} ] ] ]
);
Framed[
ListLinePlot[
Table[{z, testf[z]}, {z, 22, 36, 2}],
PlotLabel -> Row[{"DownValue approach: ",
Round[MemoryInUse[]/1024.^2],
" MB needed"
}],
AxesLabel -> {"n1,n2,n3,n4", "time/s"},ImageSize -> 500
]
]
Clear[f];
testf2[z_] :=
(
m = RandomInteger[{1, 100}, {z, z, z, z}];
f2[ni__Integer] := m[[Sequence @@ ({ni} + 1)]];
First[ Timing[ Do[ f2[n2, n4, n1, n3], {n1, z}, {n2, z}, {n3, z}, {n4, z}] ] ]
)
Framed[
ListLinePlot[
Table[{z, testf2[z]}, {z, 22, 36, 2}],
PlotLabel -> Row[{"Matrix approach: ",
Round[MemoryInUse[]/1024.^2],
" MB needed"
}],
AxesLabel -> {"n1,n2,n3,n4", "time/s"}, ImageSize -> 500
]
]
gives the two timing plots (plot images omitted).
So for larger sets of information, a matrix approach seems clearly preferable.
Of course, if you have truly large data, say more GB than you have RAM, then you just have to use a database and DatabaseLink.
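A rough, untested sketch of the DatabaseLink route, using the bundled HSQLDB driver (the file and table names here are made up):
Needs["DatabaseLink`"]
conn = OpenSQLConnection[JDBC["HSQL(Standalone)", "configCounts"]];
SQLExecute[conn, "CREATE TABLE counts (config VARCHAR(255) PRIMARY KEY, n INT)"];
SQLExecute[conn, "INSERT INTO counts VALUES ('1,0,0,0', 1)"];
SQLExecute[conn, "SELECT n FROM counts WHERE config = '1,0,0,0'"]
CloseSQLConnection[conn];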

Permutations distinct under given symmetry (Mathematica 8 group theory)

Given a list of integers like {2,1,1,0}, I'd like to list all permutations of that list that are not equivalent under a given group. For instance, using the symmetry of the square, the result would be {{2, 1, 1, 0}, {2, 1, 0, 1}}.
The approach below (Mathematica 8) generates all permutations, then weeds out the equivalent ones. I can't use it because I can't afford to generate all permutations; is there a more efficient way?
Update: actually, the bottleneck is in DeleteCases. The following list {2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0} has about a million permutations and takes 0.1 seconds to compute. Apparently there are supposed to be 1292 orderings after removing symmetries, but my approach doesn't finish in 10 minutes.
removeEquivalent[{}] := {};
removeEquivalent[list_] := (
Sow[First[list]];
equivalents = Permute[First[list], #] & /@ GroupElements[group];
DeleteCases[list, Alternatives @@ equivalents]
);
nonequivalentPermutations[list_] := (
reaped = Reap@FixedPoint[removeEquivalent, Permutations@list];
reaped[[2, 1]]
);
group = DihedralGroup[4];
nonequivalentPermutations[{2, 1, 1, 0}]
What's wrong with:
nonequivalentPermutations[list_, group_] := Union[Permute[list, #] & /@ GroupElements[group]];
nonequivalentPermutations[{2,1,1,0},DihedralGroup[4]]
I don't have Mathematica 8, so I can't test this. I just have Mathematica 7.
I got an elegant and fast solution from Maxim Rytin, relying on the ConnectedComponents function. The idea is to treat each permutation as a vertex, connect permutations related by a group generator, and count the connected components (the equivalence classes):
Module[{gens, verts, edges},
gens = PermutationList /@ GroupGenerators@DihedralGroup[16];
verts =
Permutations@{2, 2, 2, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0};
edges = Join @@ (Transpose@{verts, verts[[All, #]]} &) /@ gens;
Length@ConnectedComponents@Graph[Rule @@@ Union@edges]] // Timing
