Level and experience algorithm

Can we get either the level-up XP or the total XP from a closed formula? If so, what formula?
This is from a gambling site I found; the "Daily" is money you can collect every 24 hours.
Is there any algorithm I can follow to get something like that? Thank you for your ideas.

Level n total xp = 30 * (n-1)^4
2: 30 * 1^4 = 30
3: 30 * 2^4 = 480
...
40: 30 * 39^4 = 69,403,230
Found via prime factorization of the level totals:
$ factor 69403230
69403230: 2 3 3 3 3 3 5 13 13 13 13
$ factor 62554080
62554080: 2 2 2 2 2 3 5 19 19 19 19
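A minimal sketch of the formula as Python code (the level-up XP is then just the first difference of the totals):

def total_xp(n: int) -> int:
    # Total XP required to reach level n, per the closed formula above.
    return 30 * (n - 1) ** 4

def level_up_xp(n: int) -> int:
    # XP needed to go from level n-1 to level n (first difference).
    return total_xp(n) - total_xp(n - 1)

assert total_xp(2) == 30
assert total_xp(3) == 480
assert total_xp(40) == 69_403_230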

Related

Degrees of Freedom in SAS Proc MIXED

Below is SAS PROC MIXED code generated by JMP. Both JMP and SAS give me a confidence interval for the variance components (in the "Covariance Parameter Estimates" table). I would like the degrees of freedom in the output, or at least a formula to calculate them. Can anyone tell me where to find that?
DATA Untitled;
INPUT x y Batch &$;
LINES;
0 107.2109269 4
3 106.3777088 4
6 103.8625117 4
9 103.7524023 4
12 101.6895595 4
18 104.4145268 4
24 100.6606813 4
0 107.6603635 5
3 105.9161433 5
6 106.0260339 5
9 104.660272 5
12 102.5820776 5
18 103.7961511 5
24 101.2887124 5
0 109.2066284 6
3 106.9341754 6
6 106.6141445 6
9 106.8234541 6
12 104.7778902 6
18 106.0184734 6
24 102.9822743 6
;
RUN;
PROC MIXED ASYCOV NOBOUND DATA=Untitled ALPHA=0.05;
CLASS Batch;
MODEL y = x/ SOLUTION DDFM=KENWARDROGER;
RANDOM Batch / SOLUTION ;
RUN;
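One formula worth noting here (my suggestion, not from the original post): for a variance component, a commonly used Satterthwaite approximation gives df ≈ 2Z², where Z is the Wald statistic (the estimate divided by its standard error) that PROC MIXED reports in the "Covariance Parameter Estimates" table when the COVTEST option is used. A minimal sketch in Python:

# Hedged sketch (not SAS output): Satterthwaite degrees of freedom for a
# variance component, using df ~= 2 * Z**2 with Z = estimate / SE.
def satterthwaite_df(estimate: float, std_error: float) -> float:
    z = estimate / std_error
    return 2.0 * z * z

# e.g. an estimate of 2.5 with SE 1.2 gives df ~= 8.7
print(satterthwaite_df(2.5, 1.2))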

How to set the arguments of tf.extract_image_patches

I want to extract image patches from the input image in my tensorflow model.
Let's say the input image is [batch, in_width, in_height, channels]; I want to output [no_patches, patch_width, patch_height, channels], where no_patches is the total number of patches that can be extracted from the input image.
I found out that tf.extract_image_patches can do the job.
However, I don't understand the difference between the arguments strides and rates.
Can someone explain how to use the above function to do the work?
strides is about the movement of the window on your data.
rates is about how 'spread out' the window is.
For instance, if you use strides = [1,5,5,1], your window jumps by 5 pixels in both the 1st and 2nd dimension. If you use rates = [1,1,1,1], your window is 'compact', meaning that all the pixels are contiguous. If you use rates = [1,1,2,1], then your window spreads out in the 2nd dimension and takes every second pixel.
Example with ksizes = [1,3,2,1] (ignore strides for now): on the left we use rates = [1,1,1,1], in the middle we use rates = [1,1,2,1], and on the right we use rates = [1,2,2,1]:
* * 3 4 5 * 2 * 4 5 * 2 * 4 5
* * 8 9 10 * 7 * 9 10 6 7 8 9 10
* * 13 14 15 * 12 * 14 15 * 12 * 14 15
16 17 18 19 20 16 17 18 19 20 16 17 18 19 20
21 22 23 24 25 21 22 23 24 25 * 22 * 24 25
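A runnable sketch of the same three cases (my own illustration; it uses tf.image.extract_patches, the TF2 name for tf.extract_image_patches, where the ksizes argument is renamed to sizes):

import numpy as np
import tensorflow as tf

# The 5x5 image from the grids above, pixels numbered 1..25,
# shaped [batch, height, width, channels].
img = np.arange(1, 26, dtype=np.float32).reshape(1, 5, 5, 1)

def first_patch(rates):
    # Strides of 5 jump past the 5x5 image, so exactly one
    # (the top-left) patch is extracted.
    p = tf.image.extract_patches(images=img, sizes=[1, 3, 2, 1],
                                 strides=[1, 5, 5, 1], rates=rates,
                                 padding='VALID')
    return p.numpy().ravel()

print(first_patch([1, 1, 1, 1]))  # [ 1.  2.  6.  7. 11. 12.] -- left grid
print(first_patch([1, 1, 2, 1]))  # [ 1.  3.  6.  8. 11. 13.] -- middle grid
print(first_patch([1, 2, 2, 1]))  # [ 1.  3. 11. 13. 21. 23.] -- right grid

The starred pixels in each grid are exactly the values that land in the corresponding patch.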

How can you improve computation time when predicting KNN Imputation?

My run time feels extremely slow for my data set; this is the code:
library(caret)
library(data.table)
knnImputeValues <- preProcess(mainData[trainingRows, imputeColumns], method = c("zv", "knnImpute"))
knnTransformed <- predict(knnImputeValues, mainData[ 1:1000, imputeColumns])
The preProcess call that builds knnImputeValues runs fairly quickly; however, the predict function takes a tremendous amount of time. When I timed it on a subset of the data, this was the result:
testtime <- system.time(knnTransformed <- predict(knnImputeValues, mainData[1:15000, imputeColumns]))
testtime
   user  system elapsed
 969.78   38.70 1010.72
Additionally, it should be noted that caret preprocess uses "RANN".
Now my full dataset is:
str(mainData[ , imputeColumns])
'data.frame': 1809032 obs. of 16 variables:
$ V1: int 3 5 5 4 4 4 3 4 3 3 ...
$ V2: Factor w/ 3 levels "1000000","1500000",..: 1 1 3 1 1 1 1 3 1 1 ...
$ V3: Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
$ V4: int 2 5 5 12 4 5 11 8 7 8 ...
$ V5: int 2 0 0 2 0 0 1 3 2 8 ...
$ V6: int 648 489 489 472 472 472 497 642 696 696 ...
$ V7: Factor w/ 4 levels "","N","U","Y": 4 1 1 1 1 1 1 1 1 1 ...
$ V8: int 0 0 0 0 0 0 0 1 1 1 ...
$ V9: num 0 0 0 0 0 ...
$ V10: Factor w/ 56 levels "1","2","3","4",..: 45 19 19 19 19 19 19 46 46 46 ...
$ V11: Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
$ V12: num 2 5 5 12 4 5 11 8 7 8 ...
$ V13: num 2 0 0 2 0 0 1 3 2 8 ...
$ V14: Factor w/ 4 levels "1","2","3","4": 2 2 2 2 2 2 2 2 3 3 ...
$ V15: Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 2 2 2 ...
$ V16: num 657 756 756 756 756 ...
So is there something I'm doing wrong, or is this typical of how long it will take to run? If you extrapolate back-of-the-envelope (which I know isn't entirely accurate), you'd get what, 33 days?
Also, it looks like system time is very low while user time is very high; is that normal?
My computer is a laptop with an Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz processor.
Additionally, would this improve the runtime of the predict function?
library(doParallel)
cl <- makeCluster(4)
registerDoParallel(cl)
I tried it, and it didn't seem to make a difference other than all the processors looked more active in my task manager.
FOCUSED QUESTION: I'm using the caret package to do KNN imputation on 1.8 million rows. The way I'm currently doing it will take over a month to run; how do I write this so that it runs much faster (if possible)?
Thank you for any help provided. The answer might very well be "that's how long it takes, don't bother"; I just want to rule out any possible mistakes.
You can speed this up with the imputation package's canopies; the package can be installed from GitHub:
Sys.setenv("PKG_CXXFLAGS"="-std=c++0x")
devtools::install_github("alexwhitworth/imputation")
Canopies use a cheap distance metric (in this case, distance from the data mean vector) to get approximate neighbors. In general we want each canopy to hold fewer than about 100k rows, so for 1.8M rows we'll use 20 canopies (a sketch of the canopy idea follows the call below):
library("imputation")
to_impute <- mainData[trainingRows, imputeColumns] ## OP undefined
imputed <- kNN_impute(to_impute, k= 10, q= 2, verbose= TRUE,
parallel= TRUE, n_canopies= 20)
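For intuition, here is a minimal sketch of the canopy partitioning idea in Python (my own illustration, not the package's R internals): rank the rows by a cheap distance from the column-mean vector, cut the ranking into equal-sized bins, and run kNN only within each bin, so each neighbor search sees roughly n/c rows instead of n.

import numpy as np

def canopy_ids(X, n_canopies):
    # Assign each row of X to a canopy by its NaN-tolerant distance
    # from the column-mean vector, splitting the ranking into equal bins.
    center = np.nanmean(X, axis=0)
    d = np.sqrt(np.nansum((X - center) ** 2, axis=1))  # cheap metric
    order = np.argsort(d)
    ids = np.empty(len(X), dtype=int)
    ids[order] = np.arange(len(X)) * n_canopies // len(X)
    return ids

Since each canopy then runs its own kNN, the quadratic neighbor search costs roughly 20 * (90k)^2 distance evaluations instead of (1.8M)^2, which is where the speedup comes from.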
NOTE:
The imputation package requires numeric data inputs. You have several factor variables in your str output. They will cause this to fail.
You'll also get some mean-vector imputation if you have fully missing rows.
# note this example data is too small for canopies to be useful
# meant solely to illustrate
set.seed(2143L)
x1 <- matrix(rnorm(1000), 100, 10)
x1[sample(1:1000, size= 50, replace= FALSE)] <- NA
x_imp <- kNN_impute(x1, k=5, q=2, n_canopies= 10)
sum(is.na(x_imp[[1]])) # 0
# with fully missing rows
x2 <- x1; x2[5,] <- NA
x_imp <- kNN_impute(x2, k=5, q=2, n_canopies= 10)
[1] "Computing canopies kNN solution provided within canopies"
[1] "Canopies complete... calculating kNN."
row(s) 1 are entirely missing.
These row(s)' values will be imputed to column means.
Warning message:
In FUN(X[[i]], ...) :
Rows with entirely missing values imputed to column means.

Looking for failing test case to DP solution to MARTIAN on SPOJ

I am trying to solve the MARTIAN problem on SPOJ
My algorithm is as follows:
Define dp[i][j] = the maximum amount of minerals that can be mined in the rectangle from (0,0) to (i,j).
Use the recurrence
dp[i][j] = max(dp[i-1][j] + total yeyenum in the i-th row up to the j-th column,
               dp[i][j-1] + total bloggium in the j-th column up to the i-th row)
However, this approach yields a WA (Wrong Answer). Can someone please provide me with a test case where it fails?
I am not looking for the correct algorithm, just a test case where this approach breaks down, as I've been unable to find the bug myself.
Try this on your code (modified from the example given):
4 4
0 0 10 60
1 3 10 0
4 2 1 3
1 1 20 0
10 0 0 0
1 1 1 10
0 0 5 3
5 10 10 10
0 0
If you start by looking at [4][4], you'll choose Bloggium, because you can get 23 bloggium by going up, and only 22 Yeyenum from going left. However, you're going to miss a huge amount of Yeyenum.
Using your algorithm, you'll get 23 + 22 + 7 + 14 + 10 = 76.
If you choose the large Yeyenum instead, you'll get 70 + 14 + 10 + 22 = 116 (all Yeyenum, since the bloggium gets blocked).

Sum Pyramid with backtracking

I'm trying to solve this problem, and I'm new to backtracking algorithms.
The problem is about building a pyramid in which every number resting on two numbers is their sum, every number is different, and all numbers are less than 100. Like this:
88
39 49
15 24 25
4 11 13 12
1 3 8 5 7
Any pointers on how to do this using backtracking?
Not necessarily backtracking, but the property you are asking for is interestingly similar to that of Pascal's triangle.
Pascal's triangle (http://en.wikipedia.org/wiki/Pascal's_triangle), which is used for efficient computation of binomial coefficients among other things, is a pyramid in which each number equals the sum of the two numbers above it, with the top being 1.
As you can see, you are asking for the opposite property, where a number is the sum of the numbers below it.
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
For instance, in the Pascal triangle above, if you wanted the top of your pyramid to be 56, your pyramid would be a bottom-up reconstruction of the Pascal triangle starting from 56, giving something like:
56
21 35
6 15 20
1 5 10 10
Again, that's not a backtracking solution, and it might not give a valid answer for every single N (the reconstruction above already repeats 10, so the all-different requirement can fail), though I thought it was an interesting approximation worth noting. A backtracking sketch follows.
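For the backtracking approach the question actually asks about, one workable scheme (a sketch of my own, not a reference solution) is to choose the bottom row left to right: fixing bottom cell i determines exactly one new diagonal of cells above it, so duplicates and values of 100 or more can be pruned immediately.

def sum_pyramid(height, limit=100):
    # Backtracking sketch: fill the bottom row left to right. Fixing
    # bottom cell i determines one new diagonal of upper cells (each
    # cell is the sum of the two beneath it), so we prune as soon as
    # a value repeats or reaches the limit.
    p = [[0] * (r + 1) for r in range(height)]  # p[0] is the apex
    used = set()

    def place(i):
        if i == height:
            return True                     # whole pyramid determined
        for v in range(1, limit):
            p[height - 1][i] = v
            new_vals = []
            ok = True
            for k in range(i + 1):          # walk up the new diagonal
                r, c = height - 1 - k, i - k
                if k > 0:
                    p[r][c] = p[r + 1][c] + p[r + 1][c + 1]
                val = p[r][c]
                if val >= limit or val in used or val in new_vals:
                    ok = False
                    break
                new_vals.append(val)
            if ok:
                used.update(new_vals)
                if place(i + 1):
                    return True
                used.difference_update(new_vals)
        return False

    return p if place(0) else None

if __name__ == "__main__":
    pyramid = sum_pyramid(5)
    if pyramid is not None:
        for row in pyramid:
            print(*row)

For height 5 this finds a valid pyramid almost instantly; the early pruning along each newly determined diagonal is what keeps the search tractable.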
