I have some input data like this.
unique ID   Q1   Q2   Q3
    1        1    1    2
    2        1    1    2
    3        1    0    3
    4        2    0    1
    5        3    1    2
    6        4    1    3
My target is to extract a subset of rows that satisfies the following conditions:
total count: 4
Q1=1 count: 2
Q1=2 count: 1
Q2=1 count: 1~3
Q3=1 count: 1
In this case, both of the ID sets [1, 2, 4, 5] and [2, 3, 4, 5] are acceptable answers.
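For concreteness, checking a candidate subset against these conditions looks like this (a rough Python sketch; the row layout and the (column, value, min, max) encoding of the conditions are just for illustration):

    # Each row: id -> answers; each condition: (column, value, min_count, max_count).
    rows = {1: {"Q1": 1, "Q2": 1, "Q3": 2},
            2: {"Q1": 1, "Q2": 1, "Q3": 2},
            3: {"Q1": 1, "Q2": 0, "Q3": 3},
            4: {"Q1": 2, "Q2": 0, "Q3": 1},
            5: {"Q1": 3, "Q2": 1, "Q3": 2},
            6: {"Q1": 4, "Q2": 1, "Q3": 3}}
    conditions = [("Q1", 1, 2, 2), ("Q1", 2, 1, 1), ("Q2", 1, 1, 3), ("Q3", 1, 1, 1)]

    def satisfies(ids, total=4):
        # A candidate is valid if it has `total` rows and every count is in range.
        if len(ids) != total:
            return False
        for column, value, lo, hi in conditions:
            count = sum(1 for i in ids if rows[i][column] == value)
            if not lo <= count <= hi:
                return False
        return True

    print(satisfies([1, 2, 4, 5]), satisfies([2, 3, 4, 5]))   # True True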
In reality, I will possibly have 6000+ rows of data and up to 12 count constraints like the ones above. The counts may vary from 1 to 50.
I've written a solution which first groups all IDs by each condition and then uses depth-first search to exhaustively try out all possible combinations between the groups. (I believe this is a brute-force solution...)
However, I always run out of memory and time before I can get an answer.
My questions are:
What is the lowest possible time complexity for this problem? (I believe this is a kind of subset-sum problem, but I am not sure.)
How can I solve this problem other than by brute force? I'm considering dynamic programming or a decision tree, but I believe I would run out of memory with either of those as well. Or can I solve it using each data row's probabilities/entropy? (I would appreciate more details on this.)
My brute-force sample code is not worth reading at all, so I'll skip posting my code snippets...
I have an old rating from the DB and a new rating from the user.
I tried to search for "rating algorithm", but the approaches I found all save the ratings per user; in my case I don't save the previous ratings. My rating bar goes up to a maximum of 5.
Currently my solution is (oldR + newR) / 2. Does this make sense?
Not really. When you think about it, such a formula would mean that newer votes are weighted much more than old ones. Imagine a sequence of votes like this:
Vote: 1 1 1 1 1 1 1 1 1 1 1 5
Rating: 1 1 1 1 1 1 1 1 1 1 1 3
Clearly the rating should not be 3 in this case, it should still be 1 (or at worst 2), but with your formula it will be.
At the very least you should store the number of votes as well as the average rating, allowing you to calculate newR = ((oldR*votesCast)+newVote)/(votesCast+1). This also requires storing the rating at a higher precision, not just as an integer. (You can round it off when you display it, but internally you should keep track of fractions too.)
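A minimal sketch of that update (the function and field names are mine, not an established API):

    def add_vote(avg_rating, votes_cast, new_vote):
        # Fold one new vote into the stored running average.
        new_avg = (avg_rating * votes_cast + new_vote) / (votes_cast + 1)
        return new_avg, votes_cast + 1

    avg, n = 1.0, 11              # eleven votes of 1 so far
    avg, n = add_vote(avg, n, 5)
    print(round(avg, 2))          # 1.33 -- the single 5 barely moves the average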
A slightly better solution is to separately store how many votes have been cast for each of the 5 possible ratings so far, allowing you to calculate different kinds of means (geometric, for example).
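For example, with per-rating counts you can derive several different statistics on demand (a sketch, assuming a plain dict as storage):

    import math

    votes = {1: 11, 2: 0, 3: 0, 4: 0, 5: 1}    # rating -> number of votes cast
    total = sum(votes.values())
    arithmetic = sum(r * n for r, n in votes.items()) / total
    geometric = math.exp(sum(n * math.log(r) for r, n in votes.items() if n) / total)
    print(round(arithmetic, 2), round(geometric, 2))   # 1.33 1.14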
And obviously the most flexible (but most storage and computation intensive) is to store each individual vote with user id and timestamp, allowing you to use any algorithm you can think of.
I have already asked a similar question at Calculating Word Proximity in an inverted Index.
However, I felt that the question was too general and not refined enough, so here goes.
I have a List which contains the locations of a token in a document. For each token it looks like this:
public List<int> hitLocation;
Let's say the document is:
Java programming language has a name similar to java island in Indonesia however
local language in java bears no resemblance to the programming language called java.
and the query is
java island language
So, say I lock onto the java hit list and attempt to directly calculate the distances between the java hit list, the island hit list, and the language hit list.
Now the first problem is that there are four occurrences of the java token in the sentence. Which one do I select? Assume I select the first one.
I move on to the island token list and, after comparing, find that it is adjacent to the second occurrence of java. So I change my selection and lock onto the second occurrence of java.
Proceeding to the third token, language, I find that it is situated quite a distance from my current selection; however, it is quite near the first occurrence of java.
So you see the dilemma here: if I now revert to the original selection, i.e. the first occurrence of java, the distance to the second token "island" increases; but if I stay with my current selection, the sheer distance to the second occurrence of the token "language" will ruin the relevance.
Previously there was a suggestion to use the dot product, but I am at a loss as to how to proceed with that option.
Any other solution would also be welcome.
I understand that this question is quite detailed. However, I have searched long and hard and haven't found any question like this on this topic.
I feel that if this question is answered, it will be a great addition to the community and will make anybody designing anything related to relevancy quite happy.
Thank you.
You seem to be using the hit lists a little differently than how they are intended to be used (at least given my understanding).
Typically people compare hit lists returned by different documents. This is how they rank one document as being "more relevant" than a different document.
That said, if you want to find all locations of some multi-word phrase like "java island" given the locations of the words "java" and "island" you would...
Get a list of locations for "java"
Get a list of locations for "island"
Sort both lists
Iterate through both lists at the same time. Start by getting the first entry of both lists and test this pair: if the entries are "off by one", you have found one instance of "java island" (or perhaps "island java"). Then get the next entry from the list that currently shows the minimum value, test the new pair, and repeat. (See the sketch below.)
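A small sketch of that merge-style scan; the hit positions are the 0-based token offsets of "java" and "island" in the example sentence:

    def adjacent_pairs(hits_a, hits_b):
        # Walk both sorted hit lists at once, keeping the pairs that are off by one.
        hits_a, hits_b = sorted(hits_a), sorted(hits_b)
        i = j = 0
        matches = []
        while i < len(hits_a) and j < len(hits_b):
            if abs(hits_a[i] - hits_b[j]) == 1:
                matches.append((hits_a[i], hits_b[j]))
            if hits_a[i] < hits_b[j]:    # advance the list with the smaller entry
                i += 1
            else:
                j += 1
        return matches

    java, island = [0, 8, 16, 25], [9]
    print(adjacent_pairs(java, island))   # [(8, 9)] -> "java island" found once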
BTW -- The dot product is more useful when comparing 2 different documents.
Well, since you explicitly ask about the dot product suggestion, I'll try to explain a little more formally what I had in mind. Keep in mind that it's not very efficient, as it might convert the complexity from being based on the lengths of the hit lists into something based on the length of the text (unless there's some trick to cut that down).
My initial thought was to convert each hit list into a series of binary values spanning the text length: high where there's a hit and low otherwise.
For example, java would look like:
1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1
But since you want proximity, convert each occurrence into a pyramid. For example:
3 2 1 0 0 0 1 2 3 2 1 0 0 0 1 2 3 2 1 0 0 0 0 1 2 3
And the same for island:
0 0 0 0 0 0 0 1 2 3 2 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Now a dot product gives you some sort of proximity "score" between the two vectors, since it accumulates all the locations where two words are close (the closer, the better). java and island can be said to have a mutual score of 16. For a higher threshold you could stretch the pyramid further, or play with its shape.
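A quick sketch of that computation; the pyramid shape (3 2 1) and the hit positions are taken from the example above:

    def pyramid_vector(hits, text_length, height=3):
        # Spread each hit into a pyramid: `height` at the hit, decreasing outwards.
        vec = [0] * text_length
        for h in hits:
            for d in range(height):
                for p in (h - d, h + d):
                    if 0 <= p < text_length:
                        vec[p] = max(vec[p], height - d)
        return vec

    java = pyramid_vector([0, 8, 16, 25], 26)
    island = pyramid_vector([9], 26)
    print(sum(a * b for a, b in zip(java, island)))   # 16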
Now, here you add another requirement that this method isn't well suited for: you also want to catch the exact location of highest proximity. That isn't very well defined, IMHO. What if word1 matches word2 (at some level) at position1, but word2 matches word3 at the same level at position2 -- which location would you want?
Also, keep in mind that this method is O(text_length * words^2), which might be fine in some cases but very bad in others (if you're searching the Bible, for example).
I am looking for a solution for a task similar to the Tower of Hanoi task; however, this differs from Hanoi in that the disks are not constrained by size. The Tower of London task I am creating has 8 disks, instead of the traditional 3 or 5 (as shown in the Wikipedia link). I am using PEBL software that is "programmed primarily in C++ (although you do not need to know C++ to use PEBL), but also uses flex and bison (GNU versions of lex and yacc) to handle parsing."
Here is a video of what the task looks like in action: http://www.youtube.com/watch?v=IiBJ94HRpeM&noredirect=1
Each disk is a number, e.g., blue disk = 1, red disk = 2, etc.
1 \
2 ----\
3 ----/ 3 1
4 5 / 2 4 5
========= =========
The left side consists of the disks you have to move, to match the right side. There are 3 columns.
So if I am making it with 8 disks, I would create a trial to look like this:
1 \
2 ----\ 7 8
6 3 8 ----/ 3 6 1
7 4 5 / 2 4 5
========= =========
How do I figure out the minimum number of moves needed to make the left side look like the right side? I don't need to use PEBL to code this, but I need to know the minimum, since I am calculating how close to it a person gets on each trial.
The principle is easy, and it's called breadth-first search:
Each state has a certain number of successor states (defined by the possible moves).

1. Start with a set of states that contains only the initial state, and set the step number to 0.
2. If the end state is in the set of states, return the step number.
3. Increment the step number.
4. Rebuild the set of states by replacing the current states with each of their successor states.
5. Go to 2.
So, in each step, compute the successor states of your currently available states and check whether you have reached the target state.
BUT, be warned, this can take a while and eat up a lot of memory!
You can optimize a bit in this case, since you can leave out the predecessor state.
Still, you will have about 5 possible moves in most states, which means you will have roughly 5^N states to consider after N steps.
For example, your second example will need 10 moves, if I'm not mistaken, which gives about 10 million states. Most contemporary computers will not be able to search beyond depth 15.
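In code, the skeleton might look like this (a sketch: the successors function generating the legal Tower of London moves is assumed rather than shown, states are assumed hashable such as tuples of disk positions, and a visited set is used as a further memory saver):

    from collections import deque

    def min_moves(start, goal, successors):
        # Breadth-first search: the first time we pop `goal`, the step count
        # is guaranteed to be minimal, because states are explored level by level.
        seen = {start}
        frontier = deque([(start, 0)])
        while frontier:
            state, steps = frontier.popleft()
            if state == goal:
                return steps
            for nxt in successors(state):
                if nxt not in seen:          # never revisit a state
                    seen.add(nxt)
                    frontier.append((nxt, steps + 1))
        return None                          # goal unreachable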
I think that an algorithm that merely finds some solution would be easy and fast, but we would have no proof that its solution is the shortest one.
Consider a sales department that sets a sales goal for each day. The total goal isn't important, but the overage or underage is. For example, if Monday of week 1 has a goal of 50 and we sell 60, that day gets a score of +10. On Tuesday, our goal is 48 and we sell 46 for a score of -2. At the end of the week, we score the week like this:
[0,0]=10,[0,1]=-2,[0,2]=1,[0,3]=7,[0,4]=6
In this example, Monday (0,0), Thursday (0,3), and Friday (0,4) are all "hot".
If we look at the results from week 2, we see:
[1,0]=-4,[1,1]=2,[1,2]=-1,[1,3]=4,[1,4]=5
For week 2, the end of the week is hot, and Tuesday is warm.
Next, if we compare weeks one and two, we see that the end of the week tends to be better than the first part of the week. So, now let's add weeks 3 and 4:
[0,0]=10,[0,1]=-2,[0,2]=1,[0,3]=7,[0,4]=6
[1,0]=-4,[1,1]=2,[1,2]=-1,[1,3]=4,[1,4]=5
[2,0]=-8,[2,1]=-2,[2,2]=-1,[2,3]=2,[2,4]=3
[3,0]=2,[3,1]=3,[3,2]=4,[3,3]=7,[3,4]=9
From this, we see that the "end of the week is better" theory holds true. But we also see that the end of the month is better than the start. Of course, we would next want to compare this month with the next, or compare groups of months for quarterly or annual results.
I'm not a math or stats guy, but I'm pretty sure there are algorithms designed for this type of problem. Since I don't have a math background (and don't remember any algebra from my earlier days), where would I look for help? Does this type of "hotspot" logic have a name? Are there formulas or algorithms that can slice and dice and compare multidimensional arrays?
Any help, pointers or advice is appreciated!
This data isn't really multidimensional, it's just a simple time series, and there are many ways to analyse it. I'd suggest you start with the Fourier transform: it detects "rhythms" in a series, so this data would show a spike at the weekly period (note that with weekday-only data like yours, a week is 5 samples rather than 7 days) and another around a month, and if you extended the data set to a few years it would show a one-year spike for seasons and holidays. That should keep you busy for a while, until you're ready to use real multidimensional data, say by adding in weather information, stock market data, results of recent sports events and so on.
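If you want to try this without special tools, here is a small NumPy sketch over the 20 scores above (the data itself is from the question; everything else is illustrative):

    import numpy as np

    scores = np.array([10, -2, 1, 7, 6, -4, 2, -1, 4, 5,
                       -8, -2, -1, 2, 3, 2, 3, 4, 7, 9], dtype=float)
    spectrum = np.abs(np.fft.rfft(scores - scores.mean()))
    freqs = np.fft.rfftfreq(scores.size, d=1.0)      # in cycles per (week)day
    # Skip the zero-frequency bin, then list the strongest periods in days.
    strongest = sorted(zip(spectrum[1:], 1 / freqs[1:]), reverse=True)[:3]
    print(strongest)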
The following might be relevant to you: Stochastic oscillators in technical analysis, which are used to determine whether a stock has been overbought or oversold.
I'm oversimplifying here, but essentially you have two moving calculations:
14-day stochastic: 100 * (today's closing price - low of last 14 days) / (high of last 14 days - low of last 14 days)
3-day stochastic: same calculation, but relative to 3 days.
The 14-day and 3-day stochastics will tend to follow the same curve. With the factor of 100 in the formula, your stochastics will fall somewhere between 0 and 100; values above 80 are considered overbought (bearish), while values below 20 indicate oversold (bullish). More specifically, when your 3-day stochastic "crosses" the 14-day stochastic in one of those regions, you have a predictor of the momentum of the prices.
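Here is a tiny sketch of that calculation (the closing prices are made up):

    def stochastic(closes, window=14):
        # 100 * (latest close - lowest low) / (highest high - lowest low)
        recent = closes[-window:]
        lo, hi = min(recent), max(recent)
        return 100 * (closes[-1] - lo) / (hi - lo)

    closes = [45, 46, 44, 47, 48, 47, 49, 50, 52, 51, 53, 54, 55, 53]
    print(round(stochastic(closes), 1))             # 81.8 (14-day)
    print(round(stochastic(closes, window=3), 1))   # 0.0  (3-day)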
Although some people consider technical analysis to be voodoo, empirical evidence indicates that it has some predictive power. For what it's worth, a stochastic is a very easy and efficient way to visualize the momentum of prices over time.
It seems to me that an OLAP approach (like pivot tables in MS Excel) fits the problem perfectly.
What you want to do is quite simple - you just have to calculate the autocorrelation of your data and look at the correlogram. From the correlogram you can see 'hidden' periods of your data and then you can use this information to analyze the periods.
Here is the result - your numbers and their normalized autocorrelation.
Score   Autocorrelation
  10     1,000
  -2     0,097
   1    -0,121
   7     0,084
   6     0,098
  -4     0,154
   2    -0,082
  -1    -0,550
   4    -0,341
   5    -0,027
  -8    -0,165
  -2    -0,212
  -1    -0,555
   2    -0,426
   3    -0,279
   2     0,195
   3     0,000
   4    -0,795
   7    -1,000
   9
I used Excel to get the values: put the sequence in column A, add the formula =CORREL($A$1:$A$20;$A1:$A20) to cell B1, and copy it down to B19. If you then add a line diagram, you can nicely see the structure of the data.
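If you'd rather script it, the same calculation in Python (this should reproduce the column above, up to rounding):

    import numpy as np

    scores = np.array([10, -2, 1, 7, 6, -4, 2, -1, 4, 5,
                       -8, -2, -1, 2, 3, 2, 3, 4, 7, 9], dtype=float)
    for lag in range(19):
        a, b = scores[:len(scores) - lag], scores[lag:]
        r = np.corrcoef(a, b)[0, 1]    # Pearson correlation at this lag
        print(lag, round(r, 3))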
You can already make reasonable guesses about the periods of any patterns: you're looking at things like weekly and monthly effects. To look for weekly patterns, for example, just average all the Mondays together, and so on; the same goes for days of the month and for months of the year.
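For example (a sketch over the 20 scores above, with 5-day weeks):

    scores = [10, -2, 1, 7, 6,
              -4, 2, -1, 4, 5,
              -8, -2, -1, 2, 3,
              2, 3, 4, 7, 9]
    days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
    for d, name in enumerate(days):
        vals = scores[d::5]                    # same weekday across all weeks
        print(name, sum(vals) / len(vals))     # Fri averages 5.75, Mon 0.0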
Sure, you could use a complex algorithm to discover that there's a weekly pattern, but you already know to expect that. If you think there really may be patterns buried in there that you'd never suspect (say, a strange community of people who use a 5-day week frequents your business), then by all means use a strong tool; but if you know what kinds of things to look for, there's really no need.
Daniel has the right idea when he suggested correlation, but I don't think autocorrelation is what you want. Instead, I would suggest correlating each week with each other week. Peaks in your correlation, that is, values close to 1, suggest that the values of the weeks resemble each other (i.e., are periodic) for that particular shift.
For example, when you cross-correlate
0 0 1 2 0 0
with
0 0 0 1 1 0
the result, with one value per circular right-shift of the second array (shifts 0 through 5), would be
2 0 0 0 1 3
The highest value is 3, which corresponds to shifting the second array circularly right by 5 (equivalently, left by 1):
0 0 0 1 1 0 --> 0 0 1 1 0 0
and then multiplying component-wise:
0 0 1 2 0 0
0 0 1 1 0 0
----------------------
0 + 0 + 1 + 2 + 0 + 0 = 3
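In code, that whole cross-correlation looks like this (a small sketch):

    def circular_xcorr(a, b):
        # c[k] = sum_i a[i] * b[(i - k) % n], i.e. b shifted right by k positions.
        n = len(a)
        return [sum(a[i] * b[(i - k) % n] for i in range(n)) for k in range(n)]

    print(circular_xcorr([0, 0, 1, 2, 0, 0], [0, 0, 0, 1, 1, 0]))
    # [2, 0, 0, 0, 1, 3] -> best match at right-shift 5 (= left-shift 1)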
Note that you can also create your own "fake" week and cross-correlate it with all your real weeks; the idea is that you are looking for "shapes" of your weekly values that correspond to the shape of the fake week, by looking for peaks in the correlation result.
So if you are interested in finding weeks that are strong near the end of the week, you could use the "fake" week
-1 -1 -1 -1 1 1
and if you get a high response in the first value of the correlation, it means that the real week you correlated with has roughly this shape.
This is probably beyond the scope of what you're looking for, but one technical approach that would give you the ability to do forecasting, look at things like statistical significance, etc., would be ARIMA or similar Box-Jenkins models.