How do I combine two properties (each with an opposite impact) of the data points to filter out the best data point? - data-structures

This question is more about logic than programming. My dataset has several data points (think of the dataset as an array and of the data points as its elements). Each data point is defined by two properties. For example, if x is one of the X data points in the dataset, then a and b are the properties (or characteristics) of x. The larger the value of a (which ranges from 0 to 1; think of it as a probability), the better the chance that x should be selected. Conversely, the larger the value of b (any number greater than 1), the worse the chance that x should be selected. From the X data points I need to select the one with the maximum a value and the minimum b value. Note that a single data point may not satisfy both conditions at once; for example, x may have the largest a value but not the smallest b value, and vice versa. Hence, I want to combine a and b into a single meaningful weight that lets me pick the right data point from X.
Is there any mathematical solution to my problem?
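One common answer is to fold a and b into a single score, e.g. a weighted sum of a and 1/b (or of min-max normalised versions of both), and then take the point with the best score. Below is a minimal Python sketch of that idea; the weights w_a and w_b are assumptions you would tune for your data, not anything prescribed by the problem:

```python
# Sketch: combine a (higher is better, in [0, 1]) and b (lower is better, > 1)
# into a single score.  The weights are illustrative assumptions.
def score(a: float, b: float, w_a: float = 1.0, w_b: float = 1.0) -> float:
    """Higher score = better candidate; b is inverted so a small b raises the score."""
    return w_a * a + w_b * (1.0 / b)

# Hypothetical dataset of (a, b) pairs, one per data point.
points = [(0.9, 5.0), (0.7, 1.5), (0.95, 20.0)]
best = max(points, key=lambda p: score(*p))
print(best)
```

If a and b live on very different scales, min-max normalising b across the dataset before combining, or ranking the points by a and by b separately and summing the ranks, are common alternatives.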

Related

Algorithm to assign best value between points based on distance

I am having trouble figuring out an algorithm to best assign values to different points on a diagram based on the distance between the points.
Essentially, I am given a diagram with a central block and a dynamic number of points.
I am then given a list of values to assign to each point. Here are the rules and info:
I know the lat/long values for each point and for the central block. In other words, I can get the direct distance from every object to every other object.
The list of values may be shorter than the total number of points. In this case, values can be repeated multiple times.
In the case where values must be repeated, the duplicate values should be as far away as possible from one another.
Here is an example using the value list {1,2} (figure omitted).
This is a very simple example; in reality there may be thousands of points.
Find out how many values you need to repeat. In your example you have 2 values and 5 points, so you need 2 repetitions of the 2 values, giving 2x2 = 4 positions [call this pNum] (use different pairs as much as possible so that they end up far apart from each other).
Calculate a distance array, then find the pNum largest values in it; in your example, that means finding the greatest 4 values in the array.
Assign the repeated values to the points found to be farthest apart, and assign the rest of the points based on the distances in the array. A rough sketch of this greedy placement follows below.
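Here is that greedy placement sketched in Python, assuming planar coordinates for simplicity (with lat/long you would swap in a great-circle distance); the sample coordinates and names are illustrative, not from the original post:

```python
import math
from itertools import cycle

# Hypothetical inputs: planar coordinates for the points and a shorter value list.
points = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
values = [1, 2]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

assignment = {}                      # point index -> assigned value
placed = {v: [] for v in values}     # value -> indices of points already using it
unassigned = set(range(len(points)))

# Greedy: hand out values round-robin; each new copy of a value goes to the free
# point that is farthest from the points already carrying that same value.
for v in cycle(values):
    if not unassigned:
        break
    if not placed[v]:
        idx = next(iter(unassigned))  # first copy: any free point will do
    else:
        idx = max(unassigned,
                  key=lambda i: min(dist(points[i], points[j]) for j in placed[v]))
    assignment[idx] = v
    placed[v].append(idx)
    unassigned.remove(idx)

print(assignment)
```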

Randomly Partition versus Partition then Shuffle

Given a set of n data points generated from the same distribution, I want to "randomly partition" the set into k groups, where each contains n / k points randomly chosen from the original data set.
Alternatively, I can first divide the input data set into k contiguous chunks, where the first chunk contains 1, ..., n/k, and the second chunk contains n/k+1, ..., 2n/k, and so on. Then I "shuffle" the data points within each partition.
Are these two approaches always equivalent, given that the data set is generated from the same distribution? If not, what assumptions are needed for the two approaches to produce the same results?
Obviously they are not equivalent; the second restricts the values that can go in each partition, while the first does not.
If by "results" you mean what is done with these partitions, that would be wholly dependent on just what that is, which you provide no hint to.

efficient way for finding min value on each given region

Given an m x n matrix G, we first define two real-valued functions on it, and we also define a value m(X) for each matrix X [the definitions were given as formulas that are not reproduced here; per the title, m(X) is the minimum value over the region X].
Now, given G, we have many regions of it, denoted G_1, ..., G_r. Here, a region of G is a submatrix formed from some randomly chosen rows and columns of G. The problem is to compute m(G_1), ..., m(G_r) with as few operations as possible. Are there any methods, like building a hash table or sorting, that would get the results faster? Thanks!
========================
For example, if G={{1,2,3},{4,5,6},{7,8,9}}, then
G_1 could be {{1,2},{7,8}}
G_2 could be {{1,3},{4,6},{7,9}}
G_3 could be {{5,6},{8,9}}
=======================
Currently, for each G_i we need m x n comparisons to compute m(G_i), so computing m(G_1), ..., m(G_r) takes r x m x n comparisons. However, I notice that G_i and G_j may overlap, so there may be a more efficient approach. Any help would be highly appreciated!
Depending on how many times the min/max type data is needed, you could consider a structure that holds min/max information in between the matrix values, i.e. in the interstices between values. Thus, for your example G={{1,2,3},{4,5,6},{7,8,9}} we would define a relationship matrix R sized (mxn) by (mxn), with values drawn from the set C = {-1 = less than, 0 = equal, 1 = greater than}.
For each cell n, R holds nine relationship entries (n,1), (n,2), ..., (n,9), each a member of C (note that (n,n) is defined and equals 0). Thus row 4 of R is (1,1,1,0,-1,-1,-1,-1,-1). Now consider any of your subsets G_1, ...: knowing the positional relationships of a subset's members gives you offsets into R, which resolve to indexes into the corresponding rows of R and return the desired relationship information directly, without comparisons.
You will, of course, have to decide whether the overhead in space and computation to build R exceeds the cost of just computing what you need each time it is needed. Certain optimizations are available, including the observation that R is mirrored across its main diagonal, and that you could fold "equal" into, say, "less than" (so that C has only two values). Depending on the original matrix G, further optimizations are possible if it is known that a row or column is sorted.
Also, since some computers (mainframes, supercomputers, etc.) store data in RAM in column-major order, consider storing your dataset with rows and columns transposed so that column-to-column operations (vector calculations) actually favor the columns. Check your architecture.
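A small sketch of this relationship-matrix idea, assuming (as the question's title suggests) that m(X) is the minimum of the region; names like region_min are illustrative only:

```python
import numpy as np

G = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
flat = G.ravel()

# R[i, j] in {-1, 0, 1}: flat[i] compared with flat[j] (the set C from above).
R = np.sign(flat[:, None] - flat[None, :]).astype(np.int8)

def region_min(rows, cols):
    """Answer 'minimum of the region' from R alone: the region's minimum is the
    cell that is <= every other cell of the region."""
    idx = [r * G.shape[1] + c for r in rows for c in cols]
    for i in idx:
        if all(R[i, j] <= 0 for j in idx):
            return flat[i]

print(region_min([0, 2], [0, 1]))  # region {{1,2},{7,8}} -> 1
```

As the answer says, this only pays off if the lookups into R are genuinely cheaper than recomputing the comparisons, which depends on how often each region is queried.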

Drawing sum of values across multiple overlapping dynamic intervals

I have an array of intervals [a,b] (where [a,b] is the set of all x such that a <= x <= b). Each of these intervals has a value associated with it (think of it as the cost of something across that interval). Intervals can overlap. Intervals are dynamic (they can be added, removed, translated, and resized). Also, the value associated with any of these intervals can change.
I need to create a graph containing the sum of all such values across the interval [start, end], defined as the interval containing all of the other intervals. To do so I need an ordered list of where, along the real line, the summed value changes, as well as what values it changes between. This list needs to be easy and quick to update when an interval in the original array changes.
Side notes: assume not very large number of intervals (hundreds?).
Any suggestions on data structures / algorithms to do this effectively?
An interval tree can support these operations.
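With only hundreds of intervals, an even simpler option is to rebuild the step function of summed values by sweeping the sorted endpoints whenever something changes; this is not the interval tree the answer names, just a baseline sketch with made-up data:

```python
from collections import defaultdict

# (a, b, value) triples; endpoint handling for closed intervals is glossed over.
intervals = [(0.0, 5.0, 2.0), (3.0, 8.0, 1.5), (6.0, 10.0, 4.0)]

def step_function(intervals):
    delta = defaultdict(float)
    for a, b, value in intervals:
        delta[a] += value   # the value starts contributing at a
        delta[b] -= value   # ...and stops contributing at b
    total, steps = 0.0, []
    for x in sorted(delta):
        total += delta[x]
        steps.append((x, total))  # (breakpoint, summed value in effect from x)
    return steps

print(step_function(intervals))
# [(0.0, 2.0), (3.0, 3.5), (5.0, 1.5), (6.0, 5.5), (8.0, 4.0), (10.0, 0.0)]
```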

Which algorithm will be required to do this?

I have data of this form:
for x=1, y is one of {1,4,6,7,9,18,16,19}
for x=2, y is one of {1,5,7,4}
for x=3, y is one of {2,6,4,8,2}
....
for x=100, y is one of {2,7,89,4,5}
Only one of the values in each set is the correct value, the rest is random noise.
I know that the correct values describe a sinusoid function whose parameters are unknown. How can I find the correct combination of values, one from each set?
I am looking for something like the "travelling salesman" combinatorial optimization algorithm.
You're trying to do curve fitting, for which there are several algorithms depending on the type of curve you want to fit your data to (linear, polynomial, etc.). I have no idea whether there is a specific algorithm for sinusoidal curves (Fourier approximations), but my first idea would be to use a polynomial fitting algorithm with a polynomial approximation of the sine.
I wonder whether you need to do this in the course of another larger program, or whether you are trying to do this task on its own. If so, then you'd be much better off using a statistical package, my preferred one being R. It allows you to import your data and fit curves and draw graphs in just a few lines, and you could also use R in batch-mode to call it from a script or even a program (this is what I tend to do).
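For what it's worth, the same fit-a-curve idea sketched in Python rather than R, using scipy's generic least-squares fitter on the a cos(w*x) + b sin(w*x) + c form discussed below; the data is made up and the initial guess p0 (especially for the frequency) matters a lot:

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(x, a, b, w, c):
    return a * np.cos(w * x) + b * np.sin(w * x) + c

# Made-up data standing in for already-selected (x, y) pairs.
x = np.arange(1, 101)
y = sinusoid(x, 2.0, 1.0, 0.3, 5.0) + np.random.normal(0, 0.1, x.size)

params, _ = curve_fit(sinusoid, x, y, p0=[1.0, 1.0, 0.25, 0.0])
print(params)  # estimated a, b, w, c
```

Note that this assumes one y per x has already been chosen; picking which y is real, the harder part of the question, is what the next answer addresses.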
It depends on what you mean by "exactly", and what you know beforehand. If you know the frequency w, and that the sinusoid is unbiased, you have an equation
a cos(w * x) + b sin(w * x)
With two (x,y) points at different x values you can find a and b, and then check the generated curve against all the other points. Choose the two x values with the smallest number of y observations and try it for all the y's. If there is a bias, i.e. your equation is
a cos(w * x) + b sin(w * x) + c
You need to look at three x values.
If you do not know the frequency, you can try the same technique; unfortunately the solutions may not be unique, and there may be more than one w that fits.
Edit: As I understand your problem, you have one real y value for each x and a bunch of incorrect ones. You want to find the real values. The best way to do this is to fit curves through a small number of points and check whether the curve matches some y value in the other sets (a rough sketch follows below).
If not all the x values have valid y values, then the same technique applies, but you need to look at a much larger set of pairs, triples or quadruples (essentially every pair, triple, or quad of points with different y values).
If your problem is something else, and I suspect it is, please specify it.
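Here is a rough sketch of that pair-and-check approach, assuming a known frequency w and no bias term; the toy data and the tolerance are made up:

```python
import math
from itertools import product

w = 0.5
# x -> set of candidate y values (toy stand-in for the real sets).
data = {1: {1.0, 4.0, 6.0}, 2: {1.0, 5.0, 7.0}, 3: {2.0, 6.0, 8.0}, 4: {3.0, 7.0}}

def solve_ab(x1, y1, x2, y2):
    """Solve a*cos(w*x) + b*sin(w*x) = y for the two given points (Cramer's rule)."""
    c1, s1 = math.cos(w * x1), math.sin(w * x1)
    c2, s2 = math.cos(w * x2), math.sin(w * x2)
    det = c1 * s2 - c2 * s1
    if abs(det) < 1e-12:
        return None
    return (y1 * s2 - y2 * s1) / det, (c1 * y2 - c2 * y1) / det

def support(a, b, tol=1e-6):
    """Count how many x's have some candidate y lying on the curve."""
    return sum(any(abs(a * math.cos(w * x) + b * math.sin(w * x) - y) < tol for y in ys)
               for x, ys in data.items())

# Try every candidate pair from two x's (pick the x's with the fewest candidates),
# and keep the curve that explains the most of the other sets.
x1, x2 = 1, 4
candidates = (solve_ab(x1, y1, x2, y2) for y1, y2 in product(data[x1], data[x2]))
best = max((ab for ab in candidates if ab is not None), key=lambda ab: support(*ab))
print(best, support(*best))
```

With a bias term c you would solve a 3x3 system from three x values instead, exactly as the answer says.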
1. Define sinusoid. Most people take that to mean a function of the form a cos(w * x) + b sin(w * x) + c. If you mean something different, specify it.
2. Specify exactly what success looks like. An example with, say, 10 points instead of 100 would be nice.
It is extremely unclear what this has to do with combinatorial optimization.
Sinusoidal equations are so general that if you take any random value from each set of y's, those values can be fitted by some sinusoidal function. Unless you impose conditions, e.g. frequency < 100 or all parameters being integers, it is not theoretically possible to differentiate noise from data, so work on finding such conditions from your data source/experiment first.
By sinusoidal, do you mean a function that is increasing for n steps, then decreasing for n steps, and so on? If so, you can model your data as a sequence of nodes connected by up-links and down-links. For each node (possible value of y), record the length and end value of the chains of only ascending or only descending links (there will be multiple chains per node). Then scan for consecutive runs of equal length and opposite direction, modulo some initial offset.
