I have to compute a value by running a list through various clauses, but I don't know the best order in which to apply them, because each clause removes items from the initial list and the next clause works on the remaining list.
Is it possible to write a single clause that finds the best combination of the clauses?
calculate(List,Value):-
    calculate_value1(List,_,Value1),
    calculate_value2(List,_,Value2),
    calculate_value3(List,_,Value3),
    max([Value1,Value2,Value3],Value).

calculate_value1(List,Rest,Value1):-
    funcA(List,Rest1,ValueA),
    funcB(Rest1,Rest2,ValueB),
    funcC(Rest2,Rest,ValueC),
    Value1 is ValueA + ValueB + ValueC.

calculate_value2(List,Rest,Value2):-
    funcB(List,Rest1,ValueB),
    funcA(Rest1,Rest2,ValueA),
    funcC(Rest2,Rest,ValueC),
    Value2 is ValueA + ValueB + ValueC.

calculate_value3(List,Rest,Value3):-
    funcC(List,Rest1,ValueC),
    funcB(Rest1,Rest2,ValueB),
    funcA(Rest2,Rest,ValueA),
    Value3 is ValueA + ValueB + ValueC.
Thank you.
I have to compare two lists and find the best match between them. I run various clauses over the lists: one identifies elements that are identical between the first and the second list (a 100% match), another checks whether an element of one list is the sum of elements of the other, and so on. I also check for neighbors that are closely related: 110 is closer to 100 than to 150. And the data is not only numeric.
Right now I have several separate clauses: equals(), which identifies the elements that are equal between the two lists; sum(), which identifies elements related by a sum; multiply(); etc.
Each clause takes a list as input and returns the elements that met its criterion (sum, multiplication, ...), the percentage found, and a list of remaining elements, which becomes the input of the next clause.
Done this way, however, it is a procedural program, because I first compute the equal elements, then the sums, and so on.
I would like to create a dynamic program that can find the best percentage by applying the clauses in any order, as in the sketch below.
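To make the idea concrete, here is a rough Python sketch of the behaviour I am after (funcA, funcB and funcC stand for my clauses; each is assumed to take a list and return a score plus the remaining items):

from itertools import permutations

def best_combination(items, steps):
    # steps: functions that each take a list and return
    # (score, remaining_items), like funcA/funcB/funcC above.
    best = None
    for order in permutations(steps):
        rest, total = items, 0
        for step in order:
            score, rest = step(rest)
            total += score
        if best is None or total > best:
            best = total
    return best

In Prolog I imagine the same effect could be obtained by backtracking over the possible clause orderings and keeping the maximum.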
I hope I was more clear .
I found that I can select random nodes from Neo4j using the following queries:
MATCH (a:Person) RETURN a ORDER BY rand() limit 10
MATCH (a:Person) with a, rand() as rnd RETURN a ORDER BY rnd limit 10
Both queries seem to do the same thing, but when I try to match random nodes that are in a relationship with a given node, I get different results.
The following query always returns the same nodes (they are not randomly selected):
MATCH (p:Person{user_id: '1'})-[r:REVIEW]->(m:Movie)
return m order by rand() limit 10
...but when I use rand() in a WITH clause, I do get random nodes:
MATCH (p:Person{user_id: '1'})-[r:REVIEW]->(m:Movie)
with m, rand() as rnd
return m order by rnd limit 10
Any idea why rand() behaves differently in the second query, where it is in a WITH clause, but not in the first?
It's important to understand that using rand() in the ORDER BY like this isn't doing what you think it's doing. It's not picking a random number per row, it's ordering by a single number.
It's similar to a query like:
MATCH (p:Person)
RETURN p
ORDER BY 5
Feel free to switch up the number; in any case, it doesn't change the ordering, because sorting every row by the same constant value leaves the order unchanged.
But when you project out a random number in a WITH clause per row, then you're no longer ordering by a single number for all rows, but by a variable which is different per row.
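As an analogy (in Python, purely to illustrate the point, not how Cypher evaluates things internally): sorting by a constant key keeps the original order, while sorting by a per-row random key shuffles.

import random

rows = ["a", "b", "c", "d"]

# Like ORDER BY rand() in the RETURN: one constant sort key
# for all rows, so the original order is preserved.
print(sorted(rows, key=lambda r: 5))            # ['a', 'b', 'c', 'd']

# Like WITH m, rand() AS rnd: each row carries its own random
# key, so sorting by it really shuffles.
keyed = [(r, random.random()) for r in rows]
print([r for r, _ in sorted(keyed, key=lambda t: t[1])])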
In my application I need to compare parts of lists of sets to see if they contain the same elements. I have basically the following structure:
List 1:
Index  Set
1      (1,5)
2      (3,7)
3      ()
4      (1,9,15)
I have about 20 lists, and more than a thousand sets in each list. The sets in a list can be empty or can contain up to hundreds of elements.
I need to create the union of those sets for different intervals of my lists.
So, for example, I want to compare intervals of the former list with the following list:
List 2:
Index  Set
1      (3,6,9)
2      (2)
3      (20)
Comparing Interval List 1 from 2 to 4 with Interval List 2 from 1 to 2 should give (3,9)
Currently I use a brute-force method, simply running through both lists and comparing each set. Is there a more efficient solution?
Thanks in advance
One approach could be to create, for each such list, an auxiliary list that holds at each index a histogram of the elements that have appeared in the sets up to that index.
In your example:
Index  Histogram
1      [1=1, 5=1]
2      [1=1, 3=1, 5=1, 7=1]
3      [1=1, 3=1, 5=1, 7=1]
4      [1=2, 3=1, 5=1, 7=1, 9=1, 15=1]
Now, given two indices i and j, you can create the union of the sets at indices i, i+1, ..., j by taking two histograms, hist1 = list[i-1] and hist2 = list[j], and returning all elements x such that hist1.get(x) < hist2.get(x). This yields the union set without actually iterating over the sets in the interval.
For example, in the above list, if you want to find the union list for indices 2,3,4:
hist1=list[1] = [1=1, 5=1]
hist2=list[4] = [1=2, 3=1, 5=1, 7=1, 9=1, 15=1]
hist2-hist1 = [1=2-1, 3=1-0, 5=1-1, 7=1-0, 9=1-0, 15=1-0] =
= [1=1, 3=1, 5=0, 7=1, 9=1, 15=1]
union_set = {1,3,7,9,15}
This approach is especially useful when sets are considerably smaller than the lists, which seems to be your case.
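A minimal sketch of this idea in Python (my own rendering, assuming 1-based interval indices and collections.Counter for the histograms):

from collections import Counter

def prefix_histograms(sets):
    # hists[i] counts how often each element appears in sets 1..i.
    hists = [Counter()]
    for s in sets:
        h = hists[-1].copy()
        h.update(s)
        hists.append(h)
    return hists

def interval_union(hists, i, j):
    # Union of sets i..j: elements whose count grows between i-1 and j.
    lo, hi = hists[i - 1], hists[j]
    return {x for x in hi if hi[x] > lo[x]}

# The example from the question:
list1 = [{1, 5}, {3, 7}, set(), {1, 9, 15}]
list2 = [{3, 6, 9}, {2}, {20}]
h1 = prefix_histograms(list1)
h2 = prefix_histograms(list2)
print(interval_union(h1, 2, 4) & interval_union(h2, 1, 2))   # {3, 9} (set order may vary)

Building the prefix histograms is one pass per list; after that, each interval union is answered from just two histograms, as described above.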
Let's say I have a matrix x = [1 2 1 2 1 2 1 2 3 4 5]. To look at its histogram, I can do h = hist(x).
Now, h will hold only the counts of occurrences; it does not store the original values they correspond to.
What I want is something like a function which takes a value from x and returns the number of occurrences of that value. That said, one thing histeq does that we should admire is that it automatically scales nearby values accordingly!
How should I solve this issue? How exactly do people do it?
My reason for interest is images:
Let's say I have an image, and I want to find the number of occurrences of a given chrominance value in the image.
I'm not really sure what you are looking for, but if you want to use hist to count the number of occurrences, use:
[h,c]=hist(x,sort(unique(x)))
Otherwise hist uses bins defined by their centers. The second output argument returns the corresponding values.
hist has a second return value that will be the bin centers xc corresponding to the counts n returned as the first return value: [n, xc] = hist(x). You should have a careful look at the reference, which describes a large number of optional arguments that control the behavior of hist. However, hist is way too mighty for your specific problem.
To simply count the number of occurrences of a specific value, you could simply use something like sum(x(:) == 42). The colon operator will linearize your image matrix, the equals operator will yield a list of boolean values with 1 for each element of x that was 42, and thus sum will yield the total number of these occurrences.
An alternative to hist / histc is to use bsxfun:
n = unique(x(:)).';            % values contained in x; x can have any number of dims
y = sum(bsxfun(@eq, x(:), n)); % count for each value

For the x above, this gives n = [1 2 3 4 5] and y = [4 4 1 1 1].
I would like to do a database lookup based on a 10-digit numeric value where only the first n digits are significant. Assume that there is no way to determine n in advance by looking at the value.
For example, I receive the value 5432154321. The corresponding entry (if it exists) might have key 54 or 543215 or any value based on n being somewhere between 1 and 10 inclusive.
Is there any efficient approach to matching on such a string short of simply trying all 10 possibilities?
Some background
The value is from a barcode scan. The barcodes are EAN13 restricted circulation numbers so they have the following structure:
02[1234567890]C
where C is a check sum. The 10 digits in between the 02 and the check sum consist of an item identifier followed by an item measure. There might be a check digit after the item identifier.
Since I can't depend on the data adhering to any single standard, I would like to be able to define, on an ad hoc basis, how particular barcodes are structured, which means that the portion of the 10-digit number that I extract can be any length between 1 and 10.
Just a few ideas here:

1) Maybe store these numbers in reversed form in your DB. If you have N = 54321, you store it as N = 12345 in the DB; say N is the name of the column you stored it in. When you read K = 5432154321, reverse it too: you get K1 = 1234512345. Now check the DB column N (whose value is, let's say, P): it matches if K1 % 10^s == P, where s = floor(log10(P)) + 1.

Note: floor(log10(P)) + 1 is a formula for the count of digits of a number P > 0 (e.g. for P = 12345 it gives floor(4.09...) + 1 = 5). You may also store this value precomputed in the DB, so that you don't need to compute it each time.

2) As 1) is kind of sick (but maybe the best of the three ideas here), maybe you just store the numbers in a string column and check with the LIKE operator. But this is trivial; you probably considered it already.

3) Or... you store the numbers reversed, but you also store all their residues mod 10^k for k = 1...10, in columns col1, col2, ..., col10. Then you can compare numbers almost directly; the check will be something like

N % 10 == col1 or N % 100 == col2 or ... or N % 10^10 == col10.

Still not very elegant, though (and I'm not quite sure it's applicable to your case).
I decided to check my idea 1), so here is an example (I did it in SQL Server).
insert into numbers
(number, cnt_dig)
values
(1234, 1 + floor(log10(1234)))
insert into numbers
(number, cnt_dig)
values
(51234, 1 + floor(log10(51234)))
insert into numbers
(number, cnt_dig)
values
(7812334, 1 + floor(log10(7812334)))
select * From numbers
/*
Now we have this in our table:
id number cnt_dig
4 1234 4
5 51234 5
6 7812334 7
*/
-- Note that the values stored here are the reversed forms;
-- the actual numbers are 4321, 43215, 4332187.
-- So far so good.
-- Now we read say K = 433218799 on the input
-- We reverse it and we get K1 = 997812334
declare @K1 bigint
set @K1 = 997812334

select * from numbers
where
@K1 % power(10, cnt_dig) = number
-- So from the query above,
-- we get this row:
-- id number cnt_dig
-- 6 7812334 7
--
-- meaning we have a match
-- i.e. the actual number 433218799
-- was matched successfully with the
-- actual number (from the DB) 4332187.
So this idea 1) doesn't seem that bad after all.
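For what it's worth, here is the same matching rule as a small Python sketch (my own illustration; the key list mirrors the SQL example):

def rev(n):
    # Reverse the decimal digits of n, e.g. 4332187 -> 7812334.
    return int(str(n)[::-1])

keys = [4321, 43215, 4332187]   # actual item identifiers
# Store each key reversed, with its digit count, as in the SQL above.
table = [(rev(k), len(str(k)), k) for k in keys]

def match(scanned):
    # A stored key matches if it is a prefix of the scanned number,
    # i.e. the reversed scan ends in the reversed key.
    k1 = rev(scanned)
    return [k for stored, cnt, k in table if k1 % 10 ** cnt == stored]

print(match(433218799))   # [4332187]

Taking the digit count from the original key (len(str(k))) rather than from the reversed value also keeps keys with trailing zeros (e.g. 540, reversed to 45) matching correctly.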
I have these two tables:
TableA
ID  Opt1  Opt2  Type
1   A     Z     10
2   B     Y     20
3   C     Z     30
4   C     K     40
and
TableB
ID  Opt1  Type
1   Z     57
2   Z     99
3   X     3000
4   Z     3000
What would be a good algorithm to find arbitrary relations between these two tables? In this example, I'd like it to find the apparent relation between records containing Opt1 = C in TableA and Type = 3000 in TableB.
I could think of using Apriori in some way, but it doesn't seem too practical. What do you guys say?
Thanks.
It sounds like a relational data mining problem. I would suggest trying Ross Quinlan's FOIL: http://www.rulequest.com/Personal/
In pseudocode, a naive implementation might look like:
1. for each column c1 in table1
2.     for each column c2 in table2
3.         if approximately_isomorphic(c1, c2) then
4.             emit (c1, c2)

approximately_isomorphic(c1, c2)
1. pairs = empty set
2. for i = 1 to min(|c1|, |c2|) do
3.     add the pair (c1[i], c2[i]) to pairs
4. if |pairs| - unique_count(c1) < error_margin then return true
5. else return false
The idea is this: do a pairwise comparison of the elements of each column with each other column. For each pair of columns, collect the distinct pairs of corresponding elements from the two columns. If there are exactly as many pairs as unique elements of the first column, then you have a perfect isomorphism (each value of the first column maps to exactly one value of the second); if you have a few more, you have a near isomorphism; if you have many more, up to the number of elements in the first column, you have what probably doesn't represent any correlation.
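A runnable Python version of the sketch (the dict-of-lists table shape and the error_margin default are my own assumptions):

def approximately_isomorphic(c1, c2, error_margin=1):
    # Distinct (c1[i], c2[i]) pairs; barely more pairs than distinct
    # c1 values means each c1 value maps to (almost) one c2 value.
    n = min(len(c1), len(c2))
    pairs = {(c1[i], c2[i]) for i in range(n)}
    return len(pairs) - len(set(c1[:n])) < error_margin

def find_relations(table1, table2):
    # Tables as dicts mapping column name -> list of values.
    for name1, col1 in table1.items():
        for name2, col2 in table2.items():
            if approximately_isomorphic(col1, col2):
                yield name1, name2

tableA = {"ID": [1, 2, 3, 4], "Opt1": ["A", "B", "C", "C"],
          "Opt2": ["Z", "Y", "Z", "K"], "Type": [10, 20, 30, 40]}
tableB = {"ID": [1, 2, 3, 4], "Opt1": ["Z", "Z", "X", "Z"],
          "Type": [57, 99, 3000, 3000]}
print(list(find_relations(tableA, tableB)))
# includes ('Opt1', 'Type'): Opt1 = C lines up with Type = 3000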
Example on your input:
ID & anything : perfect isomorphism since all of ID are unique
Opt1 & ID : 4 mappings and 3 unique values; not a perfect
isomorphism, but not too far away.
Opt1 & Opt1 : ditto above
Opt1 & Type : 3 mappings & 3 unique values, perfect isomorphism
Opt2 & ID : 4 mappings & 3 unique values, not a perfect
isomorphism, but not too far away
Opt2 & Opt2 : ditto above
Opt2 & Type : ditto above
Type & anything: perfect isomorphism since all of Type are unique
For best results, you might do this procedure both ways - that is, comparing table1 to table2 and then comparing table2 to table1 - to look for bijective mappings. Otherwise, you can be thrown off by trivial cases... all values in the first are different (perfect isomorphism) or all values in the second are the same (perfect isomorphism). Note also that this technique provides a way of ranking, or measuring, how similar or dissimilar columns are.
Is this going in the right direction? By the way, this is O(ijk) where table1 has i columns, table2 has j columns, and each column has k elements. In theory, the best you could do would be O(ik + jk), if you could find correlations without doing pairwise comparisons.