I need to add sorting functionality for a specific column which contains both positive and negative numbers.
By default, sorting works as expected, but in my case I need to sort the positive values only.
Column: Percent
Values: -1% 16% 2% 12% 0%
Expected output:
Ascending order: 2% 12% 16% 0% -1%
Descending order: 16% 12% 2% 0% -1%
Any ideas how to do sorting like this?
You would need to create a custom sorting plugin that does that yourself. You could do it as in the example below:
function ignoreZeroOrBelow(a, b, high) {
    a = parseFloat(a);
    a = a > 0 ? a : high;
    b = parseFloat(b);
    b = b > 0 ? b : high;
    return ((a < b) ? -1 : ((a > b) ? 1 : 0));
}
jQuery.extend( jQuery.fn.dataTableExt.oSort, {
    "sort-positive-numbers-only-asc": function (a, b) {
        return ignoreZeroOrBelow(a, b, Number.POSITIVE_INFINITY);
    },
    "sort-positive-numbers-only-desc": function (a, b) {
        return ignoreZeroOrBelow(a, b, Number.NEGATIVE_INFINITY) * -1;
    }
});
Usage:
var dataTable = $('#example').dataTable({
    columnDefs: [
        { type: 'sort-positive-numbers-only', targets: 0 }
    ]
});
demo -> http://jsfiddle.net/scuo0t6k/
The idea behind the plugin is very simple:
extract the numeric value of the cell content (-1% == -1)
when sorting ascending, map any value <= 0 to Number.POSITIVE_INFINITY
when sorting descending, map any value <= 0 to Number.NEGATIVE_INFINITY
That way the non-positive values always sink to the bottom and only values > 0 determine the order of the column.
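As a quick sanity check you can run the comparator on the sample values with a plain Array.prototype.sort, no DataTables needed (this reuses the ignoreZeroOrBelow helper from above). Note that all values <= 0 compare equal to each other, so their relative order at the bottom is not guaranteed:
var values = ['-1%', '16%', '2%', '12%', '0%'];

var asc = values.slice().sort(function (a, b) {
    return ignoreZeroOrBelow(a, b, Number.POSITIVE_INFINITY);
});
var desc = values.slice().sort(function (a, b) {
    return ignoreZeroOrBelow(a, b, Number.NEGATIVE_INFINITY) * -1;
});

console.log(asc);   // e.g. ["2%", "12%", "16%", "-1%", "0%"]
console.log(desc);  // e.g. ["16%", "12%", "2%", "-1%", "0%"]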
Say we have 100+ participants and 3 winning places. I need to schedule as few matches as possible to find the 3 winners; the remaining places don't matter at all.
A round-robin algorithm looks unnecessarily expensive.
Here is my solution:
const match = (a, b) => {
  if (!a || !b) {
    return {winner: a || b}
  }
  // simulate match
  const winner = Math.random() > 0.5 ? a : b
  console.log(`Match between ${a} and ${b}: ${winner} wins`)
  return {winner, looser: winner === a ? b : a}
}

let participants = {
  // [id]: {win: Number, loose: Number}
}
let participantsNumber = 100
let n = 0

// create random participants
while (n < participantsNumber) {
  n++
  participants[String(n)] = {win: 0, loose: 0}
}

let round = 0
while (participantsNumber > 3) {
  let odd = true
  let matches = []
  _.map(participants, (winLooseStats, id) => {
    if (odd) {
      odd = false
      matches.push({player_1: id})
    } else {
      odd = true
      let opponentFound = false
      matches.map(match => {
        if (!match.player_2 && !opponentFound) {
          opponentFound = true
          match.player_2 = id
        }
      })
    }
  })
  console.log('matches', matches)

  // run matches
  matches.forEach(({player_1, player_2}) => {
    const {winner, looser} = match(player_1, player_2)
    participants[winner].win++
    if (looser) {
      participants[looser].loose++
    }
  })

  // remove those who have lost 3 times
  _.map(participants, ({win, loose}, id) => {
    if (loose > 2) {
      console.log(`Player ${id} has lost 3 times`)
      delete participants[id]
      participantsNumber--
    }
  })

  round++
  console.log(`Round ${round} complete. ${participantsNumber} players left`)
}

console.log(`3 champions: ${_.map(participants, (wl, id) => id).join(', ')}`)
JSFIDDLE
~12 rounds for 100 participants. Is it possible to decrease the number of rounds?
The question is a bit vague, but I think your rules are as follows:
Any pair of players plays no more than one match. If A beats B we assume A will always beat B.
We assume a transitive property: if A beats B and B beats C then we can assume A beats C, and we don't have to schedule a match to find that out.
Assuming that's correct and you have n players, you can solve this optimally using a standard single-elimination tournament to find the winner and 2nd place. To find 3rd place you have to add one more match between the two players who didn't make it to the final match.
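Here is a minimal sketch of that bracket, reusing the match() simulator from the question (its missing-opponent case conveniently doubles as a bye); singleElimination and the result field names are just illustrative:
function singleElimination(players) {
  let round = players.slice();
  let lastRoundLosers = [];  // losers of the round played just before the final
  while (round.length > 1) {
    const next = [];
    const losers = [];
    for (let i = 0; i < round.length; i += 2) {
      // round[i + 1] may be undefined, which match() treats as a bye
      const { winner, looser } = match(round[i], round[i + 1]);
      next.push(winner);
      if (looser) losers.push(looser);
    }
    if (next.length === 1) {
      // the final has just been played
      const third = lastRoundLosers.length
        ? match(lastRoundLosers[0], lastRoundLosers[1]).winner  // may itself be a bye
        : null;
      return { first: next[0], second: losers[0] || null, third };
    }
    lastRoundLosers = losers;
    round = next;
  }
  return { first: round[0], second: null, third: null };
}
With 100 players this plays 99 matches plus the one extra third-place match, in ceil(log2(100)) = 7 rounds instead of the ~12 rounds of the approach in the question.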
Let's say I have a square boolean grid (2D array) of size N. Some of the values are true and some are false (the ratio of true to false values is unspecified). I want to randomly choose an index (x, y) such that grid[x][y] is true. If I wanted a time-efficient solution, I'd do something like this (Python):
x, y = random.choice([(x, y) for x in range(N) for y in range(N) if grid[x][y]])
But this is O(N^2), which is perfectly fine for, say, a tic-tac-toe implementation, but I'm guessing it would get far too memory-consuming for large N.
If I wanted something that's not memory consuming, I'd do:
x, y = 0, 0
t = N - 1
while True:
    x = random.randint(0, t)
    y = random.randint(0, t)
    if grid[x][y]:
        break
But the issue is, if I have a grid with a size of the order of 10^4 and there are only one or two true values in it, it could take forever to "guess" which index is the one I'm interested in. How should I go about making this algorithm optimal?
If the grid is static or doesn't change much, or you have time to do some preprocessing, you could store an array that holds the number of true values per row, the total number of true values, and a list of the non-zero rows (all of which you could keep updated if the grid changes):
grid            per row
0 1 0 0 1 0     2
0 0 0 0 0 0     0
0 0 1 0 0 0     1
0 0 0 0 1 0     1
0 0 0 0 0 0     0
1 0 1 1 1 0     4
total = 8
non-zero rows: [0, 2, 3, 5]
To select a random index, choose a random value r up to the total number of true values, iterate over the array with the number of true values per non-zero row, adding them up until you know what row the r-th true value is in, and then iterate over that row to find the location of the r-th true value.
(You could simply pick a non-empty row first, and then pick a true value from that row, but that would create non-uniform probabilities: for example, a true value that is alone in its row would be chosen three times as often as each of three true values sharing another row.)
For an N×N-sized grid, the pre-processing would take N×N time and 2×N space, but the worst case look-up time would be N. In practice, using the JavaScript code example below, the pre-processing and look-up times (in ms) are in the order of:
grid size        pre-processing    look-up
10000 x 10000    5000              2.2
 1000 x  1000      50              0.22
  100 x   100       0.5            0.022
As you can see, look-up is more than 2000 times faster than pre-processing for a large grid, so if you need to randomly select several positions on the same (or slightly altered) grid, pre-processing makes a lot of sense.
function random2D(grid) {
    this.grid = grid;
    this.num = this.grid.map(function(elem) {   // number of true values per row
        return elem.reduce(function(sum, val) {
            return sum + (val ? 1 : 0);
        }, 0);
    });
    this.total = this.num.reduce(function(sum, val) {   // total number of true values
        return sum + val;
    }, 0);
    this.update = function(row, col, val) {   // change value in grid
        var prev = this.grid[row][col];
        this.grid[row][col] = val;
        if (prev ^ val) {
            this.num[row] += val ? 1 : -1;
            this.total += val ? 1 : -1;
        }
    }
    this.select = function() {   // select random index
        var row = 0, col = 0;
        var rnd = Math.floor(Math.random() * this.total) + 1;
        while (rnd > this.num[row]) {   // find row
            rnd -= this.num[row++];
        }
        while (rnd) {   // find column
            if (this.grid[row][col]) --rnd;
            if (rnd) ++col;
        }
        return {x: col, y: row};
    }
}

var grid = [], size = 1000, prob = 0.01;   // generate test data
for (var i = 0; i < size; i++) {
    grid[i] = [];
    for (var j = 0; j < size; j++) {
        grid[i][j] = Math.random() < prob;
    }
}
var rnd = new random2D(grid);                  // pre-process grid
document.write(JSON.stringify(rnd.select()));  // get random index
Keeping a list of the rows that contain at least one true value only makes sense for very sparsely populated grids, where many rows contain no true values, so I haven't implemented it in the code example. If you do implement it, the look-up time for very sparse grids drops to less than 1µs.
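A rough sketch of what that extra bookkeeping could look like, assuming it is added inside the random2D constructor above (nonZeroRows and selectSparse are names made up for this illustration; update() would also have to add or remove rows from the list whenever a row count changes between zero and non-zero):
// inside random2D, after this.num and this.total have been computed
this.nonZeroRows = [];                      // rows containing at least one true value
for (var r = 0; r < this.num.length; r++) {
    if (this.num[r] > 0) this.nonZeroRows.push(r);
}

this.selectSparse = function() {            // like select(), but skips empty rows entirely
    var i = 0, col = 0;
    var rnd = Math.floor(Math.random() * this.total) + 1;
    while (rnd > this.num[this.nonZeroRows[i]]) {   // find row among non-empty rows only
        rnd -= this.num[this.nonZeroRows[i++]];
    }
    var row = this.nonZeroRows[i];
    while (rnd) {                           // find column, exactly as in select()
        if (this.grid[row][col]) --rnd;
        if (rnd) ++col;
    }
    return {x: col, y: row};
};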
You can go with a dictionary implemented as a binary tree with logarithmic depth. This takes O(N^2) space and allows you to search/delete in O(log(N^2)) = O(log N) time. You can, for example, use a Red-Black tree.
The algorithm to find a random value might be:
t = tree.root
if (t == null)
    throw Exception("No more values")

// logarithmic search
while t.left != null or t.right != null
    pick a random value k from range(0, 1, 2)
    if (k == 0)
        break
    if (k == 1)
        if (t.left == null)
            break
        t = t.left
    if (k == 2)
        if (t.right == null)
            break
        t = t.right

result = t.value

// logarithmic delete
tree.delete(t)
return result
Of course, you can represent (i, j) indices as i * N + j.
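For example (N, i and j here are just illustration values):
var N = 4, i = 2, j = 3;
var key = i * N + j;             // 11: the key stored in the tree
var row = Math.floor(key / N);   // 2: recovers i
var col = key % N;               // 3: recovers j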
Without additional memory you can't track changes to the state of cells. And in my opinion you can't get better than O(N^2) (iterating through the array).
I am relatively new to Swift and Playgrounds. When experimenting in a playground, I wrote a piece of Swift code to calculate the average of 5 numbers:
func avg (scores: [Int]) -> (Int){
    var avg = 0
    var total = 0
    var count = 0
    for score in scores {
        total += score
        count ++
    } // Error: unexpected expression after operator
    avg = total/count
    return avg
}
let score = avg([10, 10, 10, 10, 10])
print(score)
However, it keeps giving me the error "unexpected expression after operator" (see the comment in the code above). Can someone please explain why?
The error message is a bit misleading.
The actual error reason is the space character between count and ++.
The postfix operator must follow the operand directly without any whitespace.
Anyhow, you should always use the forward-compatible syntax:
count += 1
You can try it like this:
func average(scores: [Int]) -> Int {
    var avg = 0
    for score in scores {
        avg += score
    }
    let ave = avg / scores.count
    return ave
}
Also, you're dividing the total by a count that you declared as 0. You need the count of the [Int] array, so use scores.count, which gives you the number of elements in that array.
for score in scores {
    total += score
    count ++
} // Error: unexpected expression after operator
The ++ shortcut will be removed in Swift 3. You need to do this:
for score in scores {
    total += score
    count = count + 1
}
Alternatively, keep the increment operator, but note that you can't have a space between count and ++, so write:
count++
I usually find the answers to my questions by looking around here (I'm glad Stack Overflow exists!), but I haven't found the answer to this one... I hope you can help me :)
I am using the projection.matrix() function from the "popbio" package to create transition matrices. In the function, you have to specify the "stage" and "fate" (both categorical variables), and the "fertilities" (a numeric column).
Everything works fine, but I would like to apply the function to 1:n fertility columns within the data frame, and get a list of matrices generated from the same categorical variables with the different fertility values.
This is what my data frame looks like (I only include the variables I am using for this question):
stage.fate = data.frame(replicate(2, sample(0:6,40,rep=TRUE)))
stage.fate$X1 = as.factor(stage.fate$X1)
stage.fate$X2 = as.factor(stage.fate$X2)
fertilities = data.frame(replicate(10,rnorm(40, .145, .045)))
df = cbind(stage.fate, fertilities)
colnames(df)[1:2]=c("stage", "fate")
prefix = "control"
suffix = seq(1:10)
fer.names = (paste(prefix ,suffix , sep="."))
colnames(df)[3:12] = c(fer.names)
Using
library(popbio)
projection.matrix(df, fertility=control.1)
returns a single transition matrix with the fertility values incorporated into the matrix.
My problem is that I would like to generate a list of matrices with the different fertility values in one go (in reality my data has >= 300 rows and ~100 fertility columns for each of four different treatments...).
I will appreciate your help!
-W
PS: This is what the function in popbio looks like:
projection.matrix =
function (transitions, stage = NULL, fate = NULL, fertility = NULL,
    sort = NULL, add = NULL, TF = FALSE)
{
    if (missing(stage)) {
        stage <- "stage"
    }
    if (missing(fate)) {
        fate <- "fate"
    }
    nl <- as.list(1:ncol(transitions))
    names(nl) <- names(transitions)
    stage <- eval(substitute(stage), nl, parent.frame())
    fate <- eval(substitute(fate), nl, parent.frame())
    if (is.null(transitions[, stage])) {
        stop("No stage column matching ", stage)
    }
    if (is.null(transitions[, fate])) {
        stop("No fate column matching ", fate)
    }
    if (missing(sort)) {
        sort <- levels(transitions[, stage])
    }
    if (missing(fertility)) {
        fertility <- intersect(sort, names(transitions))
    }
    fertility <- eval(substitute(fertility), nl, parent.frame())
    tf <- table(transitions[, fate], transitions[, stage])
    T_matrix <- try(prop.table(tf, 2)[sort, sort], silent = TRUE)
    if (class(T_matrix) == "try-error") {
        warning(paste("Error sorting matrix.\n Make sure that levels in stage and fate columns\n match stages listed in sort option above.\n Printing unsorted matrix instead!\n"),
            call. = FALSE)
        sort <- TRUE
        T_matrix <- prop.table(tf, 2)
    }
    T_matrix[is.nan(T_matrix)] <- 0
    if (length(add) > 0) {
        for (i in seq(1, length(add), 3)) {
            T_matrix[add[i + 0], add[i + 1]] <- as.numeric(add[i + 2])
        }
    }
    n <- length(fertility)
    F_matrix <- T_matrix * 0
    if (n == 0) {
        warning("Missing a fertility column with individual fertility rates\n",
            call. = FALSE)
    }
    else {
        for (i in 1:n) {
            fert <- tapply(transitions[, fertility[i]], transitions[, stage], mean, na.rm = TRUE)[sort]
            F_matrix[i, ] <- fert
        }
    }
    F_matrix[is.na(F_matrix)] <- 0
    if (TF) {
        list(T = T_matrix, F = F_matrix)
    }
    else {
        T_matrix + F_matrix
    }
}
<environment: namespace:popbio>
My question was answered via ResearchGate by Caner Aktas
Answer:
fertility.list <- vector("list", length(suffix))
names(fertility.list) <- fer.names
for (i in suffix) fertility.list[[i]] <- projection.matrix(df, fertility = fer.names[i])
fertility.list
Applying popbio “projection.matrix” to multiple fertilities and generate list of matrices?. Available from: https://www.researchgate.net/post/Applying_popbio_projectionmatrix_to_multiple_fertilities_and_generate_list_of_matrices#5578524f60614b1a438b459b [accessed Jun 10, 2015].
So, using the regular MongoDB library in Ruby, I have the following query to find the average filesize across a set of 5001 documents:
avg = 0
total = collection.count()
Rails.logger.info "#{total} asset creation stats in the system"
collection.find().each {|row| avg += (row["filesize"] * (1/total.to_f)) if row["filesize"]}
It's pretty simple, so I'm trying to do the same using map/reduce as a learning exercise. This is what I came up with:
map = 'function(){ emit("filesizes", {size: this.filesize, num: 1}); }'
reduce = 'function(k, vals){
    var result = {size: 0, num: 0};
    for(var x in vals) {
        var new_total = result.num + vals[x].num;
        result.num = new_total
        result.size = result.size + (vals[x].size * (vals[x].num / new_total));
    }
    return result;
}'
#results = collection.map_reduce(map, reduce)
However, the two queries come back with two different results!
What am I doing wrong?
You're weighting the results by doing the division in every reduce function.
Say you had [{size: 5, num: 1}, {size: 5, num: 1}, {size: 5, num: 1}]. Your reduce would calculate:
result.size = 0 + (5*(1/1)) = 5
result.size = 5 + (5*(1/2)) = 7.5
result.size = 7.5 + (5*(1/3)) ≈ 9.17
As you can see, this weights the results towards the earliest elements.
Fortunately, there's a simple solution: just add a finalize function, which is run once after the reduce step is finished.
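For example, here is a sketch of the usual pattern, with the JavaScript functions written out directly (in your Ruby code they would go into string literals just like map and reduce above; the avg field name is only for illustration). The reduce only accumulates sums, and finalize does the single division at the end:
var map = function () {
    if (this.filesize) {                 // same guard as in the Ruby loop
        emit("filesizes", { size: this.filesize, num: 1 });
    }
};

var reduce = function (key, vals) {
    var result = { size: 0, num: 0 };    // size is a running sum here, not an average
    vals.forEach(function (v) {
        result.size += v.size;
        result.num  += v.num;
    });
    return result;                       // same shape as the emitted values, so re-reducing stays correct
};

var finalize = function (key, val) {
    val.avg = val.size / val.num;        // divide exactly once, after all reducing is done
    return val;
};
If I remember the Ruby driver correctly, finalize is passed as an option to map_reduce (something like collection.map_reduce(map, reduce, :finalize => finalize)), but check your driver version's documentation for the exact signature.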