I'm designing an algorithm that calculates the total price of a product.
A product can be composed of composite ingredients (X, Y, Z) or basic ingredients (a, b, c); each basic ingredient has an associated price.
A composite ingredient is itself composed of other composite or basic ingredients, e.g. X (composite) = Z (composite) + a (basic ingredient).
To calculate the total price of a product, I currently use a recursive algorithm that decomposes each composite ingredient into basic ingredients and sums up their prices.
I want to know whether there is already a better algorithm for this kind of problem.
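For reference, a minimal sketch of that recursion in Python, with memoisation added, which is the main improvement available when composites share sub-ingredients (all ingredient names and prices below are made-up examples):

```python
from functools import lru_cache

# Made-up example data: basic ingredients have prices, composites
# are lists of sub-ingredients (basic or composite).
basic_prices = {"a": 2.0, "b": 3.0, "c": 5.0}
composites = {
    "Z": ["b", "c"],
    "X": ["Z", "a"],
}

@lru_cache(maxsize=None)  # price each composite only once, even if shared
def price(ingredient):
    """Total price: a basic ingredient's price, or the sum over parts."""
    if ingredient in basic_prices:
        return basic_prices[ingredient]
    return sum(price(part) for part in composites[ingredient])

print(price("X"))  # Z (3 + 5) + a (2) = 10.0
```

With memoisation the cost is linear in the number of distinct ingredients rather than in the number of paths through the composition graph, which is the main win when composites are reused.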
Thanks.
I'm trying to derive a measure of tumour heterogeneity in scRNA-seq data. For a given individual's scRNA-seq gene expression matrix, I would like to calculate Pearson correlations within and between clusters (comparing the average cell-to-cell correlation within clusters with the average cell-to-cell correlation between clusters).
The idea is that if an individual has substantial transcriptional heterogeneity, the inter-cluster correlation would be negative and the within-cluster correlation positive. If the individual's tumours had a uniform transcriptional state, they would have a near-zero inter-cluster correlation. I am using the cor() function from the R 'stats' package with the log-normalized gene expression matrix as input:
c2.dat <- data.frame(c2@assays$RNA@data) # gene expression matrix for a subject's cells in "cluster 2"
c2.cor <- cor(c2.dat, method = "pearson") # correlation analysis on the log-normalized matrix
I am stuck, though, once I have a correlation matrix. How do I calculate the average cell-to-cell correlation within this cluster?
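One way to average over distinct cell pairs is to take the off-diagonal entries of the correlation matrix, e.g. its upper triangle; in R that is mean(c2.cor[upper.tri(c2.cor)]). A toy NumPy sketch of the same idea (the matrix shapes and data here are made up for illustration):

```python
import numpy as np

# `expr` stands in for one cluster's log-normalised genes x cells matrix.
rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 12))           # 100 genes, 12 cells (toy data)

cor_mat = np.corrcoef(expr, rowvar=False)   # 12 x 12 cell-cell Pearson matrix

# Average over distinct cell pairs: take the strictly upper triangle,
# which excludes the diagonal of self-correlations (all equal to 1).
iu = np.triu_indices_from(cor_mat, k=1)
mean_within = cor_mat[iu].mean()
print(mean_within)
```

The same upper-triangle averaging applied to a between-cluster correlation matrix gives the inter-cluster counterpart.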
Thank you :)
I realise I've written this like a homework question, but that's because it's the simplest way for me to understand and try to relay the problem. It's something I want to solve for a personal project.
I have cards laid out in a grid, I am starting with a simple case of a 2x2 grid but want to be able to extrapolate to larger n×n grids eventually.
The cards are all face down, and printed on the face of each card is either:
a positive non-zero integer, representing the card's 'score',
or a black spot.
I am given the information of the sum of the scores of each row, the number of black spots in each row, and the sum of the scores of each column, and the number of black spots in each column.
So the top row must have a sum score of 1, and exactly one of the cards is a black spot.
The rightmost column must have a sum score of 2, and exactly one of the cards is a black spot.
Etc.
Of course we can see the above grid will "solve" to
Now I want to make a function that inputs the given information and produces the grid of cards that satisfies those constraints.
I am thinking I can use tuple-like arguments to the function.
And then every "cell" (card) in the grid is itself a tuple: the first element is the score of the card (or 0 if it is a black spot), and the second element is 1 if the card is a black spot and 0 otherwise.
So the grid will have to resemble that ^^
I can find out what all the a, b, variables are by solving this system of equations:
(Knowing also that all of these numbers are integers which are ≥0).
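For concreteness, the 2x2 case can be brute-forced over small non-negative integers. A sketch in Python; note that only the top-row and rightmost-column constraints are stated explicitly above, so the remaining row/column sums here are assumptions inferred from the example's unique solution:

```python
from itertools import product

# Assumed full constraint set for the 2x2 example: row sums [1, 2],
# column sums [1, 2], exactly one black spot (encoded as 0) per line.
row_sums, col_sums = [1, 2], [1, 2]
row_spots, col_spots = [1, 1], [1, 1]

def solve(max_score=3):
    """Yield every 2x2 grid of values 0..max_score meeting the constraints."""
    for tl, tr, bl, br in product(range(max_score + 1), repeat=4):
        grid = [[tl, tr], [bl, br]]
        rows_ok = all(sum(r) == s and r.count(0) == n
                      for r, s, n in zip(grid, row_sums, row_spots))
        cols = [list(c) for c in zip(*grid)]
        cols_ok = all(sum(c) == s and c.count(0) == n
                      for c, s, n in zip(cols, col_sums, col_spots))
        if rows_ok and cols_ok:
            yield grid

print(list(solve()))  # -> [[[1, 0], [0, 2]]], the unique grid
```

Brute force is fine for 2x2 but blows up for larger n×n grids, which is exactly where a constraint solver earns its keep.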
I wanted to use this problem as a learning exercise in Prolog, as it seems like a problem Prolog will solve elegantly.
Have I made a good decision or is Prolog not a good choice?
I wonder how I can implement this in Prolog.
Prolog is very good for this kind of problem. Have a look at clp(fd), that is, Constraint Logic Programming over Finite Domains.
This snippet shows a primitive way to solve your initial 2x2 example in SWI-Prolog:
:- use_module(library(clpfd)).

test(Vars) :-
    Vars = [TopLeft, TopRight, BottomLeft, BottomRight],
    global_cardinality([TopLeft, TopRight], [0-1,1-_,2-_]), TopLeft + TopRight #= 1,
    global_cardinality([TopLeft, BottomLeft], [0-1,1-_,2-_]), TopLeft + BottomLeft #= 1,
    global_cardinality([BottomLeft, BottomRight], [0-1,1-_,2-_]), BottomLeft + BottomRight #= 2,
    global_cardinality([TopRight, BottomRight], [0-1,1-_,2-_]), TopRight + BottomRight #= 2,
    label(Vars).
Query:
?- test(Vars).
Vars = [1, 0, 0, 2].
You can take this as a starting point and generalize. Note that the black spot is represented as 0, because clp(fd) deals only with integers.
Here is the documentation: http://www.swi-prolog.org/man/clpfd.html
I have a square-planar lattice represented as an NxN grid Graph. Is there any way in JUNG to get the symmetric counterpart of a specific vertex, given the axis of symmetry? Example: 8->0, 5->3.
My goal is to get distinct pairs of nodes, since the pairs (4,1), (4,7), (4,3) and (4,5) are essentially the same, and (1,3) would be the same as (3,7), etc. Perhaps some algorithm can be performed on a matrix and then translated back to the Graph.
General graphs aren't really particularly well-suited to this sort of thing, because they don't have a built-in notion of rows, columns, symmetry about an axis, etc.; they're all about topology, not geometry.
If you really want something like this, you should either create a subtype of Graph that has the operations you want, and create an implementation to match, or just create the corresponding matrix (and a mapping from matrix locations to graph nodes) and do the operations on that matrix instead.
So far I have been able to write an algorithm that rotates a matrix 3 times and keeps track of the nodes at fixed indices. The same can be written for any type of Graph, using the nodes' visual coordinates instead of indices.
fun rotateMatrix(matrix: List<IntArray>): List<IntArray> {/*---*/}

val mat: List<IntArray> = /*---*/ // the original matrix of node indices
val reflections = mutableListOf<Pair<Int, Int>>()
(0..2).fold(mat) { acc, _ ->
    val rotated = rotateMatrix(acc)
    mat.forEachIndexed { x, row ->
        row.forEachIndexed { y, _ ->
            reflections.add(mat[x][y] to rotated[x][y])
        }
    }
    rotated
}
The result is a relation describing that (0,2,8,6) are the "same", (1,5,3,7) are the same, etc. The only thing left to do is to use this output to determine which pairs of nodes correspond to which reflective siblings.
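That grouping step can be sketched as computing, for each cell index, its orbit under the four rotations of the square. Python is used here for brevity (rotations only, matching the rotation-based snippet above; cells are numbered row-major 0..n²-1, as in the 3x3 example):

```python
def rotate(m):
    """Rotate a square matrix 90 degrees clockwise."""
    return [list(row) for row in zip(*m[::-1])]

def orbits(n):
    """Group cell indices of an n x n grid into orbits under the four
    rotations of the square, so members of one orbit are 'the same'."""
    base = [[r * n + c for c in range(n)] for r in range(n)]
    groups = {}
    m = base
    for _ in range(4):  # identity plus three successive rotations
        for r in range(n):
            for c in range(n):
                groups.setdefault(base[r][c], set()).add(m[r][c])
        m = rotate(m)
    # Emit each distinct orbit once, in order of its smallest member.
    seen, result = set(), []
    for idx in range(n * n):
        orbit = frozenset(groups[idx])
        if orbit not in seen:
            seen.add(orbit)
            result.append(sorted(orbit))
    return result

print(orbits(3))  # corners [0, 2, 6, 8], edges [1, 3, 5, 7], center [4]
```

Mapping orbit members back to JUNG vertices is then just a lookup through the index-to-node mapping suggested in the answer above; handling reflections as well would mean adding the mirrored matrices to the same loop.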
I have the following class:
class Person
{
GenderEnum Gender;
RaceEnum Race;
double Salary;
...
}
I want to create 1000 instances of this class such that the collection of 1000 Persons follow these 5 demographic statistics:
50% male; 50% female
55% white; 20% black; 15% Hispanic; 5% Asian; 2% Native American; 3% Other;
10% < $10K; 15% $10K-$25K; 35% $25K-$50K; 20% $50K-$100K; 15% $100K-$200K; 5% over $200K
Mean salary for females is 77% of mean salary for males
Mean Salary as a percentage of mean white salary:
white - 100%.
black - 75%.
Hispanic - 83%.
Asian - 115%.
Native American - 94%.
Other - 100%.
The categories above are exactly what I want but the percentages given are just examples. The actual percentages will be inputs to my application and will be based on what district my application is looking at.
How can I accomplish this?
What I've tried:
I can pretty easily create 1000 instances of my Person class and assign the Gender and race to match my demographics. (For my project I'm assuming male/female ratio is independent of race). I can also randomly create a list of salaries based on the specified percentage brackets. Where I run into trouble is figuring out how to assign those salaries to my Person instances in such a way that the mean salaries across gender and mean salaries across race match the specified conditions.
I think you can solve this by assuming that the distribution of income for all categories is the same shape as the one you gave, but scaled by a factor which makes all the values larger or smaller. That is, the income distribution has the same number of bars and the same mass proportion in each bar, but the bars are shifted towards smaller values or towards larger values, and all bars are shifted by the same factor.
If that's reasonable, then this has an easy solution.
Note that the mean of the income distribution over all people is M = sum(p[i]*c[i], i, 1, #bars), where p[i] = mass proportion of bar i and c[i] = center of bar i.
For each group j, you have the group mean sum(s[j]*p[i]*c[i], i, 1, #bars) = s[j]*M, where s[j] is the scale factor for group j.
Furthermore, you know that the overall mean is equal to the sum of the means of the groups, weighting each by the proportion of people in that category, i.e. M = sum(s[j]*M*q[j], j, 1, #groups), where q[j] is the proportion of people in group j.
Finally, you are given specific values for the mean of each group relative to the mean for white people, i.e. you know (s[j]*M)/(s[k]*M) = s[j]/s[k] = some fraction, where k is the index of the white group.
From this much you can solve these equations for s[k] (the scaling factor for the white group) and then get each s[j] from that.
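Concretely, the last solving step looks like this in Python, using the example percentages from the question (the real values would be the application's inputs); dividing the overall-mean equation by M leaves sum(q[j]*s[j]) = 1, which pins down the white group's factor:

```python
# Group proportions q[j] and means relative to white r[j] = s[j]/s_white,
# taken from the question's example percentages.
q = {"white": 0.55, "black": 0.20, "hispanic": 0.15,
     "asian": 0.05, "native": 0.02, "other": 0.03}
r = {"white": 1.00, "black": 0.75, "hispanic": 0.83,
     "asian": 1.15, "native": 0.94, "other": 1.00}

# M = sum_j q[j]*(s[j]*M) implies sum_j q[j]*s[j] = 1, and with
# s[j] = r[j]*s_white this gives s_white = 1 / sum_j q[j]*r[j].
s_white = 1 / sum(q[g] * r[g] for g in q)
s = {g: r[g] * s_white for g in q}

# Sanity check: group means weighted by proportion reproduce the overall mean.
assert abs(sum(q[g] * s[g] for g in q) - 1) < 1e-12
print(s_white)  # ~ 1.074 for these example numbers
```

Scaling each group's bracket boundaries by its s[j] (and repeating the same computation for gender within each group) then yields distributions to sample each Person's salary from.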
I've spelled this out for the racial groups only. You can repeat the process for men versus women, starting with the distribution you found for each racial group and finding an additional scaling factor. I would guess that if you did it the other way, gender first and then race, you would get the same results, but although it seems obvious I wouldn't be sure unless I worked out a proof of it.
I am working on a scoring system for tickets, and each ticket can potentially have up to 4 different kinds of scores. What I would like to do is combine these four scores into a final score and prioritize the tickets. I'd also like to assign a weight to each of the 4 scores. The details of the 4 scores are listed below:
Score A: 1-5 scale, desired relative weight: 2
Score B: 1-4 scale, desired relative weight: 3
Score C: 1-10 scale, desired relative weight: 2
Score D: 1-5 scale, desired relative weight: 1
Some requirements:
(1) Each ticket may come with an arbitrary number of scores; sometimes we have all 4, sometimes we have none (a default final score is needed).
(2) If the ticket gets high scores from multiple sources, the final score should be even higher, and vice versa.
(3) Scores with higher weights play a bigger role in deciding the final score.
(4) The final score should be on a 1-4 scale.
I wonder if there are any existing algorithms for solving this kind of issue? Thanks ahead.
Desired input and output examples:
(1) Input: {A: N/A, B: 4, C: 9, D: N/A}
Output: {Final: 4}
Since both available scores are high.
(2) Input: {A: 3, B: N/A, C: 8, D: 1}
Output: {Final: 3}
Although score D is small, it has a small weight, so we still get a relatively big final score.
(3) Input: {A: N/A, B: N/A, C: N/A, D: N/A}
Output: {Final: 2}
An arguable default score.
The overall idea is to rank the tickets according to the four scores.
Define an initial relative weight W for every score.
Convert every initial score S from its initial scale A into a universal score S' on a universal scale B from minB to maxB.
If a score is missing, give it a default value, for example:
Calculate the final score with your new S'.
a and b are your exponents for the score and the weight, respectively.
If you make a large, then only the biggest score really matters; if you make b large, then only the biggest weight really matters.
Keeping a and b in [1;2] shouldn't be too extreme. With a or b equal to 1, you have a normal weighting system that doesn't weight bigger scores more.
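The formulas referenced above did not survive here, so the following is one concrete (and entirely assumed) reading of the steps in Python: rescale each score to the 1-4 target scale, then combine with a weighted power mean, which has the stated behaviour: a > 1 emphasises the biggest scores, b > 1 emphasises the biggest weights, and a = b = 1 reduces to a plain weighted average. As a simplification, missing scores are dropped here rather than given a default, and the default final score only applies when no score is present:

```python
# Scales and weights from the question; a, b and DEFAULT are assumptions.
SCALES = {"A": (1, 5), "B": (1, 4), "C": (1, 10), "D": (1, 5)}
WEIGHTS = {"A": 2, "B": 3, "C": 2, "D": 1}
DEFAULT = 2.0  # the question's "arguable default score"

def rescale(value, lo, hi, new_lo=1, new_hi=4):
    """Linearly map value from [lo, hi] onto the universal [new_lo, new_hi]."""
    return new_lo + (value - lo) * (new_hi - new_lo) / (hi - lo)

def final_score(scores, a=1.5, b=1.5):
    """Combine available scores (e.g. {'B': 4, 'C': 9}) into a 1-4 value
    via a weighted power mean: (sum(w^b * s^a) / sum(w^b)) ** (1/a)."""
    known = {k: rescale(v, *SCALES[k])
             for k, v in scores.items() if v is not None}
    if not known:
        return DEFAULT  # no information: fall back to the default score
    num = sum(WEIGHTS[k] ** b * s ** a for k, s in known.items())
    den = sum(WEIGHTS[k] ** b for k in known)
    return (num / den) ** (1 / a)

print(round(final_score({"B": 4, "C": 9})))  # the question's example (1) -> 4
```

With these assumed parameters the sketch reproduces the question's examples: {B: 4, C: 9} rounds to 4, {A: 3, C: 8, D: 1} rounds to 3, and an empty input yields the default 2.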