Mathematica TimeSeries divide each Month by mean - wolfram-mathematica

I have a dataset of climate data covering two years, and I would like to divide each month's values by that month's mean. Each row is a date followed by two values.
As output I would like a table in which each value is divided by its monthly mean.
I have already tried TimeSeriesAggregate and TimeSeriesWindow, but I can't get it done. I can get the monthly means of one series with monat = TimeSeriesAggregate[UNI, "Month"].
Thanks for your help.
{{{2012, 5, 2}, 560.352, 569.852}, {{2012, 5, 3}, 468.519,
359.593}, {{2012, 5, 4}, 227.648, 236.704}, {{2012, 5, 5}, 640.056,
505.833}, {{2012, 5, 6}, 538.426, 537.907}, {{2012, 5, 7}, 312.389,
318.519}, {{2012, 5, 8}, 732.574, 706.852}, {{2012, 5, 9}, 725.167,
692.926}, {{2012, 5, 10}, 679.852, 672.481}, {{2012, 5, 11},
657.389, 640.148}, {{2012, 5, 12}, 399.463,
382.519}, {{2012, 5, 13}, 247.815, 322.981}, {{2012, 5, 14},
319.093, 337.444}, {{2012, 5, 15}, 742.333,
716.889}, {{2012, 5, 16}, 182.296, 179.444}, {{2012, 5, 17},
693.963, 672.5}, {{2012, 5, 18}, 755.222, 736.778}, {{2012, 5, 19},
740.667, 728.667}, {{2012, 5, 20}, 716.778,
701.722}, {{2012, 5, 21}, 161.167, 147.778}, {{2012, 5, 22}, 64.463,
69.6111}, {{2012, 5, 23}, 527.222, 482.741}, {{2012, 5, 24},
578.648, 611.981}, {{2012, 5, 25}, 524.093,
520.685}, {{2012, 5, 26}, 516.704, 562.704}, {{2012, 5, 27},
448.093, 403.296}, {{2012, 5, 28}, 590.444,
610.741}, {{2012, 5, 29}, 621.074, 712.556}, {{2012, 5, 30},
553.481, 525.167}, {{2012, 5, 31}, 495.093, 477.907}, {{2012, 6, 1},
554.315, 544.}, {{2012, 6, 2}, 267.907, 315.556}, {{2012, 6, 3},
724.815, 695.444}, {{2012, 6, 4}, 276.426, 283.981}, {{2012, 6, 5},
727.185, 708.926}, {{2012, 6, 6}, 578.185, 581.056}, {{2012, 6, 7},
677.056, 655.481}, {{2012, 6, 8}, 613.537, 621.685}, {{2012, 6, 9},
83.0926, 81.7593}, {{2012, 6, 10}, 136.481, 97.963}, {{2012, 6, 11},
194.944, 136.167}, {{2012, 6, 12}, 239.722,
226.537}, {{2012, 6, 13}, 436.833, 414.611}, {{2012, 6, 14}, 609.37,
736.704}, {{2012, 6, 15}, 756.037, 722.259}, {{2012, 6, 16},
763.907, 746.167}, {{2012, 6, 17}, 748.407,
733.685}, {{2012, 6, 18}, 740.981, 732.759}, {{2012, 6, 19}, 571.,
554.759}, {{2012, 6, 20}, 709.037, 696.685}, {{2012, 6, 21},
608.481, 612.315}, {{2012, 6, 22}, 412.963, 421.5}, {{2012, 6, 23},
509.963, 516.574}, {{2012, 6, 24}, 740.148,
712.093}, {{2012, 6, 25}, 81.1481, 83.9259}, {{2012, 6, 26},
655.222, 676.944}, {{2012, 6, 27}, 581.074,
696.667}, {{2012, 6, 28}, 694.519, 701.407}, {{2012, 6, 29},
599.111, 552.722}, {{2012, 6, 30}, 720.204, 709.278}, {{2012, 7, 1},
689.556, 677.407}, {{2012, 7, 2}, 697.333, 675.333}, {{2012, 7, 3},
534.37, 517.704}, {{2012, 7, 4}, 589.5, 601.63}, {{2012, 7, 5},
545.667, 518.889}, {{2012, 7, 6}, 680.13, 661.759}, {{2012, 7, 7},
725.13, 715.759}, {{2012, 7, 8}, 732.278, 661.333}, {{2012, 7, 9},
471.741, 581.333}, {{2012, 7, 10}, 680.944,
654.741}, {{2012, 7, 11}, 564.63, 520.037}, {{2012, 7, 12}, 316.667,
306.019}, {{2012, 7, 13}, 98.2037, 79.2222}, {{2012, 7, 14},
550.407, 508.222}, {{2012, 7, 15}, 199.87, 223.667}, {{2012, 7, 16},
520.426, 595.185}, {{2012, 7, 17}, 544.778,
563.426}, {{2012, 7, 18}, 608.741, 679.333}, {{2012, 7, 19},
685.907, 672.389}, {{2012, 7, 20}, 411.704,
425.296}, {{2012, 7, 21}, 62.5926, 57.0185}, {{2012, 7, 22},
444.907, 527.093}, {{2012, 7, 23}, 592.759,
604.741}, {{2012, 7, 24}, 535.13, 488.315}, {{2012, 7, 25}, 389.481,
358.444}, {{2012, 7, 26}, 564.537, 516.481}, {{2012, 7, 27},
618.074, 608.278}, {{2012, 7, 28}, 665.167,
653.278}, {{2012, 7, 29}, 428.87, 439.778}, {{2012, 7, 30}, 496.485,
450.1}, {{2012, 7, 31}, 693.241, 683.222}, {{2012, 8, 1}, 618.278,
658.963}, {{2012, 8, 2}, 645.741, 654.537}, {{2012, 8, 3}, 427.519,
396.648}, {{2012, 8, 4}, 681.074, 661.63}, {{2012, 8, 5}, 656.963,
649.167}, {{2012, 8, 6}, 666.722, 641.907}, {{2012, 8, 7}, 565.463,
501.407}, {{2012, 8, 8}, 298.056, 389.444}, {{2012, 8, 9}, 335.704,
271.796}, {{2012, 8, 10}, 441.278, 403.722}, {{2012, 8, 11}, 357.5,
338.5}, {{2012, 8, 12}, 704.889, 690.537}, {{2012, 8, 13}, 542.389,
672.407}, {{2012, 8, 14}, 660.611, 653.167}, {{2012, 8, 15},
644.519, 602.593}, {{2012, 8, 16}, 582.056,
557.444}, {{2012, 8, 17}, 557.796, 607.815}, {{2012, 8, 18},
580.556, 609.056}, {{2012, 8, 19}, 633.019,
622.981}, {{2012, 8, 20}, 643.87, 629.13}, {{2012, 8, 21}, 544.889,
520.5}, {{2012, 8, 22}, 414.111, 480.13}, {{2012, 8, 23}, 478.167,
471.019}, {{2012, 8, 24}, 581.222, 576.5}, {{2012, 8, 25}, 599.481,
585.519}, {{2012, 8, 26}, 47.0926, 45.2963}, {{2012, 8, 27},
624.722, 545.204}, {{2012, 8, 28}, 625.333,
611.778}, {{2012, 8, 29}, 579.389, 569.13}, {{2012, 8, 30}, 562.315,
537.667}, {{2012, 8, 31}, 77.2963, 74.2593}, {{2012, 9, 1},
65.3148, 64.8889}, {{2012, 9, 2}, 528.056, 530.963}, {{2012, 9, 3},
455.519, 463.741}, {{2012, 9, 4}, 294.667, 252.019}, {{2012, 9, 5},
490.537, 454.574}, {{2012, 9, 6}, 397.444, 387.185}, {{2012, 9, 7},
542.981, 530.333}, {{2012, 9, 8}, 586.278, 571.222}, {{2012, 9, 9},
570.037, 555.5}, {{2012, 9, 10}, 536.056, 527.556}, {{2012, 9, 11},
531.815, 516.37}, {{2012, 9, 12}, 352.296, 324.222}, {{2012, 9, 13},
111.704, 110.407}, {{2012, 9, 14}, 213.148,
201.278}, {{2012, 9, 15}, 433.611, 373.389}, {{2012, 9, 16},
349.463, 328.481}, {{2012, 9, 17}, 535.296,
522.037}, {{2012, 9, 18}, 526.5, 510.759}, {{2012, 9, 19}, 97.9259,
105.074}, {{2012, 9, 20}, 486.704, 477.944}, {{2012, 9, 21},
492.204, 486.}, {{2012, 9, 22}, 449.148, 419.241}, {{2012, 9, 23},
451.167, 444.407}, {{2012, 9, 24}, 352.333,
345.111}, {{2012, 9, 25}, 477.574, 467.444}, {{2012, 9, 26},
234.944, 197.093}, {{2012, 9, 27}, 218.5, 199.5}, {{2012, 9, 28},
429.481, 415.37}, {{2012, 9, 29}, 117.63, 114.185}, {{2012, 9, 30},
79.0556, 71.6852}, {{2012, 10, 1}, 427.167,
353.704}, {{2012, 10, 2}, 141.87, 143.519}, {{2012, 10, 3}, 466.907,
450.796}, {{2012, 10, 4}, 415.796, 391.648}, {{2012, 10, 5},
282.889, 267.333}, {{2012, 10, 6}, 436.852,
422.574}, {{2012, 10, 7}, 326.667, 322.519}, {{2012, 10, 8},
423.019, 398.463}, {{2012, 10, 9}, 290.444,
268.87}, {{2012, 10, 10}, 61.5185, 65.3519}, {{2012, 10, 11},
117.093, 104.667}, {{2012, 10, 12}, 59.8889,
55.463}, {{2012, 10, 13}, 86.0556, 74.463}, {{2012, 10, 14}, 331.87,
389.111}, {{2012, 10, 15}, 324.926, 332.148}, {{2012, 10, 16},
65.7778, 61.5185}, {{2012, 10, 17}, 360.574,
336.389}, {{2012, 10, 18}, 373.741, 363.093}, {{2012, 10, 19},
402.833, 386.87}, {{2012, 10, 20}, 379.204,
355.463}, {{2012, 10, 21}, 370.204, 356.889}, {{2012, 10, 22},
353.056, 338.537}, {{2012, 10, 23}, 347.074,
306.259}, {{2012, 10, 24}, 52.2407, 49.4815}, {{2012, 10, 25},
42.537, 40.7778}, {{2012, 10, 26}, 46.5556,
40.1852}, {{2012, 10, 27}, 91.4074, 95.4259}, {{2012, 10, 28},
118.37, 107.704}, {{2012, 10, 29}, 231.87,
217.667}, {{2012, 10, 30}, 246.722, 235.389}, {{2012, 10, 31},
318.611, 305.667}, {{2012, 11, 1}, 62.963, 71.3704}, {{2012, 11, 2},
56.0741, 47.2407}, {{2012, 11, 3}, 125.704,
104.574}, {{2012, 11, 4}, 177.778, 159.241}, {{2012, 11, 5},
29.6481, 28.1667}, {{2012, 11, 6}, 153.13, 133.833}, {{2012, 11, 7},
220.796, 221.648}, {{2012, 11, 8}, 275.444,
265.704}, {{2012, 11, 9}, 260.204, 256.852}, {{2012, 11, 10},
129.37, 109.704}, {{2012, 11, 11}, 212.352,
203.722}, {{2012, 11, 12}, 17.4259, 18.2963}, {{2012, 11, 13},
70.8889, 70.7037}, {{2012, 11, 14}, 87.0556,
97.5185}, {{2012, 11, 15}, 210.056, 197.167}, {{2012, 11, 16},
59.3704, 59.6296}

TimeSeriesAggregate can be used to get the monthly means, and TimeSeriesMapThread to divide the daily values by their month's mean. I assigned your values to a variable dat.
First we create a TimeSeries. I noticed that your data does not begin on the 1st of the month, and TimeSeriesAggregate picks monthly windows starting on the date of the first data point, so I added a 1 May 2012 value to make the aggregation months start on the 1st.
ts = TimeSeriesInsert[
  TimeSeries[{#[[1]], #[[2 ;; 3]]} & /@ dat], {{2012, 5, 1}, dat[[1, 2 ;; 3]]}];
Use ts to calculate the monthly averages. I take the "DatePath" for the mapping function.
mthTs = TimeSeriesAggregate[ts, {"Month", Left}]["DatePath"];
The mapping function will take both the date and the values to decide which set of means to standardise by.
stdByMean[date_, values_] :=
 values/mthTs[[LengthWhile[mthTs, DateObject@date >= First@# &], 2]]
stdByMean is threaded through the time series to produce the desired result.
TimeSeriesMapThread[stdByMean, ts]
Keep in mind that in your data the last month is not a full month so the standardisation is not strictly correct.
Hope this helps.
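For readers coming from outside Mathematica, the grouping logic of this answer can be sketched in plain Python (a hypothetical helper, not part of the original answer): group the rows by (year, month), average both value columns together, then divide each day's pair by its month's mean:

```python
from collections import defaultdict

def divide_by_monthly_mean(rows):
    # rows: [((year, month, day), v1, v2), ...] as in the data above.
    # Pool both value columns per (year, month), mirroring the
    # Flatten-based monthly mean used in the Mathematica answers.
    pooled = defaultdict(list)
    for (y, m, _), v1, v2 in rows:
        pooled[(y, m)].extend([v1, v2])
    means = {k: sum(vs) / len(vs) for k, vs in pooled.items()}
    # Divide each daily pair by its month's mean.
    return [((y, m, d), v1 / means[(y, m)], v2 / means[(y, m)])
            for (y, m, d), v1, v2 in rows]
```

Averaging each column separately instead is a small change: keep one pool per column per month.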

{DateObject[#[[1, 1, 1 ;; 2]]],
   Mean[Flatten[#[[All, 2 ;; 3]]]]} & /@
 GatherBy[data, #[[1, 1 ;; 2]] &]
DateListPlot@%
Leave out the Flatten if you wanted separate means of the two values for each day.
Edit: using TimeSeriesAggregate
DateListPlot[TimeSeriesAggregate[{#[[1]],
    Mean[#[[2 ;; 3]]]} & /@ data, "Month"]]
The result is slightly different because TimeSeriesAggregate is averaging over month-long intervals, but not necessarily aligned with the calendar months.

Related

Creating a nested dictionary comprehension for year and month in python

I would like to create a nested dictionary with dict comprehension but I am getting syntax error.
years = [2016, 2017, 2018, 2019]
months = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
my_Dict = {i:{j: for j in months}, for i in years}
I am not sure how to declare this nested dict comprehension without getting a syntax error.
The syntax error comes from the stray comma and from `j:` having no value after the colon. If each year should simply map to the collection of months, a dictionary with a nested list comprehension is the simplest fix:
years = [2016, 2017, 2018, 2019]
months = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
my_Dict = {
year: [month for month in months] for year in years
}
print(my_Dict)
>>> {2016: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
2017: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
2018: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
2019: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]}
With some slight changes the comprehension above also parses, though note that the inner braces now build sets, not dicts:
key_years = {2016, 2017, 2018, 2019}
key_months = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
myDict = {i:{j for j in key_months} for i in key_years}
print (myDict)
Output: {2016: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, 2017: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, 2018: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}, 2019: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}}
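As an aside (my addition, not from the answers above): supplying a placeholder value per month key yields a true nested dict comprehension, in case a dict-of-dicts shape is what is actually needed:

```python
years = [2016, 2017, 2018, 2019]
months = list(range(1, 13))

# Each year maps to its own inner dict of month -> placeholder value.
nested = {year: {month: 0 for month in months} for year in years}
```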

Encoding of list of sum of 2 dice that has better compression. (byte restricted)

Imagine any game in which two six-sided dice are used.
It is needed to store the history of the game, we want to store the sums resulting from rolling the dice in the whole game.
In traditional Huffman encoding, 7 has the highest probability, so it is encoded in 3 bits, while 2 and 12 need 5 bits.
In that case, one symbol is encoded with a variable code size.
However, I'm trying to figure out an encoding in which each single byte (8 bits) encodes a different sequence of dice sums.
So in this case the code size is constant (8 bits), but the number of symbols per code is variable. Naive example:
0x00 = {2}
0x01 = {3}
...
0x0A = {12}
0x0B = {2,2}
0x0C = {2,3}
0x0D = {2,4} etc.
So the decoder can read byte by byte, and each byte is decoded independently of the others.
How do I find the mapping that gives the best compression?
Can you point me to an algorithm that solves this kind of compression problem?
My thoughts about this is:
Single-sum sequences can be assigned the codes 0x00 to 0x0A (sums 2 to 12).
I can split the sequence {7} into {7,2}, {7,3}, ..., {7,12} and assign codes to those sequences.
If I do this for the whole list of {7,x}, I can remove {7} from the single-sum codes (because any sequence that starts with 7 is reachable via the two-sum sequences).
So, the resulting encoding would be:
{2} - {6}
{8} - {12}
{7,2} - {7,12}
Then, for example, I think {6,6}, {6,7} or {6,8} could provide more "value" (higher probability) than {7,2} or {7,12}.
But if I remove {7,2} or {7,12}, then I should return {7} to the list (otherwise {7,2} could not be expressed).
Something like this:
{2} - {12}
{7,3} - {7,11}
{6,6} - {6,8}
So, there should be some kind of "trade-off" in this problem.
Here's a solution that I think achieves a rate of approximately 7.733629 bits per byte. (Generating code in Python 3, if you want to play with it: https://github.com/eisenstatdavid/huffman/blob/master/huffman.py) My algorithm is an EM-ish procedure that alternately (1) computes the stationary distribution of the first roll in a byte and (2) chooses the 256 most probable words subject to the constraint that any infinite sequence can be encoded. I would conjecture optimality, though I know only that this solution is a local maximum (and even then, assuming my code has no bugs, etc.).
{{2}, {12}, {7, 7}, {6, 7}, {8, 7}, {5, 7}, {9, 7}, {4, 7}, {10,
7}, {7, 6}, {7, 8}, {6, 6}, {6, 8}, {8, 6}, {8, 8}, {5, 6}, {5, 8},
{9, 6}, {9, 8}, {4, 6}, {4, 8}, {10, 6}, {10, 8}, {7, 5}, {7, 9},
{6, 5}, {6, 9}, {8, 5}, {8, 9}, {5, 5}, {5, 9}, {9, 5}, {9, 9}, {3,
7}, {11, 7}, {4, 5}, {4, 9}, {10, 5}, {10, 9}, {7, 4}, {7, 10}, {3,
6}, {3, 8}, {11, 6}, {11, 8}, {6, 4}, {6, 10}, {8, 4}, {8, 10}, {5,
4}, {5, 10}, {9, 4}, {9, 10}, {4, 4}, {4, 10}, {10, 4}, {10, 10},
{3, 5}, {3, 9}, {11, 5}, {11, 9}, {7, 3}, {7, 11}, {2, 7}, {12, 7},
{6, 3}, {6, 11}, {8, 3}, {8, 11}, {5, 3}, {5, 11}, {9, 3}, {9, 11},
{3, 4}, {3, 10}, {11, 4}, {11, 10}, {4, 3}, {4, 11}, {10, 3}, {10,
11}, {2, 6}, {2, 8}, {12, 6}, {12, 8}, {2, 5}, {2, 9}, {12, 5},
{12, 9}, {3, 3}, {3, 11}, {11, 3}, {11, 11}, {7, 2}, {7, 12}, {7,
7, 7}, {2, 4}, {2, 10}, {12, 4}, {12, 10}, {6, 2}, {6, 12}, {8, 2},
{8, 12}, {6, 7, 7}, {8, 7, 7}, {5, 2}, {5, 12}, {9, 2}, {9, 12},
{5, 7, 7}, {9, 7, 7}, {4, 2}, {4, 12}, {10, 2}, {10, 12}, {4, 7,
7}, {10, 7, 7}, {7, 6, 7}, {7, 7, 6}, {7, 7, 8}, {7, 8, 7}, {6, 6,
7}, {6, 8, 7}, {8, 6, 7}, {8, 8, 7}, {6, 7, 6}, {6, 7, 8}, {8, 7,
6}, {8, 7, 8}, {5, 6, 7}, {5, 7, 6}, {5, 7, 8}, {5, 8, 7}, {9, 6,
7}, {9, 7, 6}, {9, 7, 8}, {9, 8, 7}, {4, 6, 7}, {4, 7, 6}, {4, 7,
8}, {4, 8, 7}, {10, 6, 7}, {10, 7, 6}, {10, 7, 8}, {10, 8, 7}, {7,
6, 6}, {7, 6, 8}, {7, 8, 6}, {7, 8, 8}, {7, 5, 7}, {7, 7, 5}, {7,
7, 9}, {7, 9, 7}, {6, 6, 6}, {6, 6, 8}, {6, 8, 6}, {6, 8, 8}, {8,
6, 6}, {8, 6, 8}, {8, 8, 6}, {8, 8, 8}, {5, 6, 6}, {5, 6, 8}, {5,
8, 6}, {5, 8, 8}, {9, 6, 6}, {9, 6, 8}, {9, 8, 6}, {9, 8, 8}, {2,
3}, {2, 11}, {12, 3}, {12, 11}, {6, 5, 7}, {6, 7, 5}, {6, 7, 9},
{6, 9, 7}, {8, 5, 7}, {8, 7, 5}, {8, 7, 9}, {8, 9, 7}, {5, 5, 7},
{5, 7, 5}, {5, 7, 9}, {5, 9, 7}, {9, 5, 7}, {9, 7, 5}, {9, 7, 9},
{9, 9, 7}, {4, 6, 6}, {4, 6, 8}, {4, 8, 6}, {4, 8, 8}, {10, 6, 6},
{10, 6, 8}, {10, 8, 6}, {10, 8, 8}, {3, 2}, {3, 12}, {11, 2}, {11,
12}, {3, 7, 7}, {11, 7, 7}, {4, 5, 7}, {4, 7, 5}, {4, 7, 9}, {4,
9, 7}, {10, 5, 7}, {10, 7, 5}, {10, 7, 9}, {10, 9, 7}, {7, 5, 6},
{7, 5, 8}, {7, 9, 6}, {7, 9, 8}, {7, 6, 5}, {7, 6, 9}, {7, 8, 5},
{7, 8, 9}, {6, 6, 5}, {6, 6, 9}, {6, 8, 5}, {6, 8, 9}, {8, 6, 5},
{8, 6, 9}, {8, 8, 5}, {8, 8, 9}, {6, 5, 6}, {6, 5, 8}, {6, 9, 6},
{6, 9, 8}, {8, 5, 6}, {8, 5, 8}, {8, 9, 6}, {8, 9, 8}, {5, 5, 6},
{5, 5, 8}, {5, 6, 5}, {5, 6, 9}, {5, 8, 5}, {5, 8, 9}, {5, 9, 6},
{5, 9, 8}, {9, 5, 6}, {9, 5, 8}, {9, 6, 5}, {9, 6, 9}, {9, 8, 5},
{9, 8, 9}, {9, 9, 6}, {9, 9, 8}, {7, 4, 7}, {7, 7, 4}, {7, 7, 10},
{7, 10, 7}}
There's a simpler, suboptimal solution that packs about 7.438148 bits of entropy into a byte. The 251 codewords are all length-3 sequences that start with {5, 7}, {6, 6}, {6, 7}, {6, 8}, {7, 5}, {7, 6}, {7, 7}, {7, 8}, {7, 9}, {8, 6}, {8, 7}, {8, 8}, {9, 7}, plus all length-2 sequences that don't start with one of those prefixes.
Chart of whether to take the third roll given the first two:
2 3 4 5 6 7 8 9 10 11 12
2 - - - - - - - - - - -
3 - - - - - - - - - - -
4 - - - - - - - - - - -
5 - - - - - X - - - - -
6 - - - - X X X - - - -
7 - - - X X X X X - - -
8 - - - - X X X - - - -
9 - - - - - X - - - - -
10 - - - - - - - - - - -
11 - - - - - - - - - - -
12 - - - - - - - - - - -
It's hard to analyze solutions where the encoder may or may not pack the next roll depending on its value -- the probability distribution of the next roll to be encoded is affected.
Assuming you are interested in the sum, not in the sequence:
Variable length: Huffman
{2: 01110, 3: 0110, 4: 1100, 5: 000, 6: 001, 7: 010, 8: 100, 9: 101, 10: 111, 11: 1101, 12: 01111};
For fixed length: look up Tunstall coding.
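A quick numeric sanity check (my sketch, not part of the answers): the entropy of a single two-dice sum and the expected length of the Huffman code quoted above can be computed directly; any byte-packed scheme averages at most 8 / entropy sums per byte.

```python
from fractions import Fraction
from math import log2

# Probability of each sum s in 2..12 for two fair dice: p(s) = (6 - |s - 7|) / 36.
p = {s: Fraction(6 - abs(s - 7), 36) for s in range(2, 13)}

# Entropy in bits per roll (about 3.2744).
H = -sum(float(q) * log2(float(q)) for q in p.values())

# Code lengths of the Huffman code quoted above.
lengths = {2: 5, 3: 4, 4: 4, 5: 3, 6: 3, 7: 3, 8: 3, 9: 3, 10: 3, 11: 4, 12: 5}
expected_len = sum(float(p[s]) * lengths[s] for s in p)  # 119/36, about 3.3056
```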

find modular multiplicative inverse

Is it possible to find the solution of the following: 9^(-1) % M, i.e. the inverse of 9 modulo M, where 2 <= M <= 10^9? M may not be prime, and gcd(9, M) may not be 1. If it is not possible to find such a solution, is there any method to solve ((10^n - 1)/9) % M for 1 <= n <= 10^16?
Since this is tagged wolfram-mathematica I assume you are asking in the context of Mathematica, in which case there is a built-in function to do this:
PowerMod[9,-1,m]
This will give you the inverse of 9, modulo m, for whatever value of m you want.
Table[PowerMod[9,-1,m],{m,2,1000}]
will produce:
PowerMod::ninv: 0 is not invertible modulo 3. >>
PowerMod::ninv: 3 is not invertible modulo 6. >>
PowerMod::ninv: 0 is not invertible modulo 9. >>
General::stop: Further output of PowerMod::ninv will be
suppressed during this calculation. >>
{1, PowerMod[9, -1, 3], 1, 4, PowerMod[9, -1, 6], 4, 1,
PowerMod[9, -1, 9], 9, 5, PowerMod[9, -1, 12], 3, 11,
PowerMod[9, -1, 15], 9, 2, PowerMod[9, -1, 18], 17, 9,
PowerMod[9, -1, 21], 5, 18, PowerMod[9, -1, 24], 14, 3,
PowerMod[9, -1, 27], 25, 13, PowerMod[9, -1, 30], 7, 25,
PowerMod[9, -1, 33], 19, 4, PowerMod[9, -1, 36], 33, 17,
PowerMod[9, -1, 39], 9, 32, PowerMod[9, -1, 42], 24, 5,
PowerMod[9, -1, 45], 41, 21, PowerMod[9, -1, 48], 11, 39,
PowerMod[9, -1, 51], 29, 6, PowerMod[9, -1, 54], 49, 25,
PowerMod[9, -1, 57], 13, 46, PowerMod[9, -1, 60], 34, 7,
PowerMod[9, -1, 63], 57, 29, PowerMod[9, -1, 66], 15, 53,
PowerMod[9, -1, 69], 39, 8, PowerMod[9, -1, 72], 65, 33,
PowerMod[9, -1, 75], 17, 60, PowerMod[9, -1, 78], 44, 9,
PowerMod[9, -1, 81], 73, 37, PowerMod[9, -1, 84], 19, 67,
PowerMod[9, -1, 87], 49, 10, PowerMod[9, -1, 90], 81, 41,
PowerMod[9, -1, 93], 21, 74, PowerMod[9, -1, 96], 54, 11,
PowerMod[9, -1, 99], 89}
You can get rid of the invalid output from that list with:
Select[%, IntegerQ]
which gives:
{1, 1, 4, 4, 1, 9, 5, 3, 11, 9, 2, 17, 9, 5, 18, 14, 3, 25, 13, 7,
25, 19, 4, 33, 17, 9, 32, 24, 5, 41, 21, 11, 39, 29, 6, 49, 25, 13,
46, 34, 7, 57, 29, 15, 53, 39, 8, 65, 33, 17, 60, 44, 9, 73, 37, 19,
67, 49, 10, 81, 41, 21, 74, 54, 11, 89}
You can also organize this table better by skipping the values of m that are not coprime with 9. One way to achieve this using more advanced Mathematica syntax would be:
Map[{#, PowerMod[9, -1, #]} &, Select[Range[2, 100], GCD[#, 9] == 1 &]]
which gives you output:
{{2, 1}, {4, 1}, {5, 4}, {7, 4}, {8, 1}, {10, 9}, {11, 5}, {13, 3},
{14, 11}, {16, 9}, {17, 2}, {19, 17}, {20, 9}, {22, 5}, {23, 18},
{25, 14}, {26, 3}, {28, 25}, {29, 13}, {31, 7}, {32, 25}, {34, 19},
{35, 4}, {37, 33}, {38, 17}, {40, 9}, {41, 32}, {43, 24}, {44, 5},
{46, 41}, {47, 21}, {49, 11}, {50, 39}, {52, 29}, {53, 6}, {55, 49},
{56, 25}, {58, 13}, {59, 46}, {61, 34}, {62, 7}, {64, 57}, {65, 29},
{67, 15}, {68, 53}, {70, 39}, {71, 8}, {73, 65}, {74, 33}, {76, 17},
{77, 60}, {79, 44}, {80, 9}, {82, 73}, {83, 37}, {85, 19}, {86, 67},
{88, 49}, {89, 10}, {91, 81}, {92, 41}, {94, 21}, {95, 74}, {97, 54},
{98, 11}, {100, 89}}
That gives you a list of pairs; each pair has the m value followed by the inverse of 9 modulo that m. For example, {61, 34} is on the list, which means 9*34 is 1 mod 61. You can check that this is right.
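Outside Mathematica (an aside, assuming Python 3.8 or later): the built-in three-argument pow performs the same modular inversion, and raises ValueError when the inverse does not exist, analogous to the PowerMod::ninv message:

```python
# Modular inverse of 9 mod 61 via three-argument pow (Python 3.8+).
inv = pow(9, -1, 61)        # matches the pair {61, 34} above

# When gcd(9, m) != 1 there is no inverse and pow raises ValueError.
try:
    pow(9, -1, 6)
    invertible_mod_6 = True
except ValueError:
    invertible_mod_6 = False
```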

n'th biggest number in a multi-dimensional list in Mathematica

Imagine I have a 2D list of numbers in Mathematica :
myList = Table[{i,i*j},{i,1,10},{j,1,10}];
and I want to retrieve the 5th highest value in an efficient way, but using RankedMax gives an error. For example,
Max[myList]
gives 100 but:
RankedMax[myList,1]
gives :
RankedMax::vec : "Input {{{1, 1}, {1, 2}, {1, 3}, {1, 4}, {1, 5}, {1, 6}, \
{1, 7}, {1, 8}, {1, 9}, {1, 10}}, {{2, 2}, {2, 4}, {2, 6}, {2, 8}, {2, 10}, \
{2, 12}, {2, 14}, {2, 16}, {2, 18}, {2, 20}}, 6, {{9, 9}, {9, 18}, {9, 27}, \
{9, 36}, {9, 45}, {9, 54}, {9, 63}, {9, 72}, {9, 81}, {9, 90}}, {{10, 10}, \
{10, 20}, {10, 30}, {10, 40}, {10, 50}, {10, 60}, {10, 70}, {10, 80}, {10, \
90}, {10, 100}}} is not a vector
How do I use RankedMax on my data, or is there another way around this?
Use Flatten
RankedMax[Flatten@myList, 1]
This is fine if he's just looking for the fifth biggest of all the numbers in the table. If, as I suspect, he's looking for the fifth biggest of the calculated terms -- the second element in each pair -- we should slightly amend the previous solution to read:
RankedMax[Flatten@Map[Rest, myList, {2}], 5]
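As a cross-check in plain Python (an analogue using the standard library, not Mathematica): heapq.nlargest on the flattened data reproduces both readings of the question:

```python
import heapq

# Analogue of Table[{i, i*j}, {i, 1, 10}, {j, 1, 10}].
my_list = [[(i, i * j) for j in range(1, 11)] for i in range(1, 11)]

# All numbers, like RankedMax[Flatten@myList, n].
flat = [x for row in my_list for pair in row for x in pair]
largest = heapq.nlargest(1, flat)[0]        # 100, like Max[myList]

# Only the computed i*j terms, like the Map[Rest, ...] variant.
products = [pair[1] for row in my_list for pair in row]
fifth = heapq.nlargest(5, products)[-1]     # fifth biggest, counted with multiplicity
```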

Sort all levels of expression

What's a good way to Sort all levels of an expression? The following does what I want when expression has rectangular structure, but I'd like it to work for non-rectangular expressions as well
Map[Sort, {expr}, Depth[expr] - 1]
For instance, the following should print True
sorted = deepSort[{{{1, 3, 8}, {3, 7, 6}, {10, 4, 9}, {3, 8, 10,
6}, {8, 2, 5, 10}, {8, 5, 10,
9}}, {{{1, 3, 8}, {3, 8, 10, 6}}, {{3, 7, 6}, {3, 8, 10,
6}}, {{10, 4, 9}, {8, 5, 10, 9}}, {{3, 8, 10, 6}, {8, 2, 5,
10}}, {{8, 2, 5, 10}, {8, 5, 10, 9}}}}];
checkSortedLevel[k_] := Map[OrderedQ, sorted, {k}];
And @@ Flatten[checkSortedLevel /@ Range[0, 2]]
deepSort[expr_] := Map[Sort, expr, {0, -2}]
Note that this will work even if your expr contains heads other than List.
If your expression contains heads other than List and you do not want those sorted, this may be useful:
expr /. List :> Composition[Sort, List]
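For a rough Python analogue of deepSort (my sketch; it assumes each level is homogeneous -- all numbers or all lists -- unlike Mathematica's canonical order over arbitrary expressions): sort the children recursively, then sort the current level:

```python
def deep_sort(x):
    # Leaves pass through unchanged; each list is sorted after
    # its elements have themselves been deep-sorted.
    if not isinstance(x, list):
        return x
    return sorted(deep_sort(e) for e in x)
```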
