Check value in two lists in Prolog

I want to implement a predicate to check whether some values are the same in two lists.
For example: win([1,2,3,4,5]). win([2,3,4,5,6]).
Now I want to compare [2,3,4,5,10] with the above, one by one. If at least 4 elements are the same, then some further action should follow. I know member/2 can be used for an exact match, but I don't know how to match on only part of the elements.
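A minimal sketch of one way to do this in SWI-Prolog, using intersection/3 and length/2 from library(lists); the predicate name at_least_4_common is made up for illustration, and duplicate elements are only counted once:

win([1,2,3,4,5]).
win([2,3,4,5,6]).

% at_least_4_common(+Candidate): true if Candidate shares at least
% four elements with some win/1 list.
at_least_4_common(Candidate) :-
    win(Win),
    intersection(Candidate, Win, Common),
    length(Common, N),
    N >= 4.

For example, at_least_4_common([2,3,4,5,10]) succeeds, because [2,3,4,5] is common with win([2,3,4,5,6]).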

Related

How to traverse an unknown-length path in SurrealDB?

I want to recursively follow related records in SurrealDB, and I can't find the syntax to express it.
The simplest explanation of my goal is Neo4j/Cypher's variable length pattern matches. More generally, I want to start at a record and follow particular relations until I stop (either by number of steps or some other condition), where I don't know how many relation steps are needed between start and end.
The closest I can find is discussed here, in the section on 'No JOINs'. This doesn't fill my need, because the query specifies the number of steps between start and end. I'm imagining something like SELECT {->parent->person REPEATED 1..5} FROM person:tobie, which would find all of tobie's ancestors for 5 generations (person:tobie->parent->person, person:tobie->parent->person->parent->person, etc).
If this isn't part of SurrealQL's features, can you give me hints on other ways to get the same result? I've considered using the scripting functions, which seems powerful but off the beaten path.

Fast algorithm for approximate lookup on multiple keys

I have formulated a solution to a problem where I am storing parameters in a set of tables, and I want to be able to look up the parameters based on multiple criteria.
For example, if criteria 1 and criteria 2 can each be either A or B, then I'd have four potential parameters - one for each combination A&A, A&B, B&A and B&B. For these sorts of criteria I could concatenate the fields or something similar and create a unique key to look up each value quickly.
Unfortunately not all of my criteria are like this. Some of the criteria are numerical and I only care about whether or not a result sits above or below a boundary. That also wouldn't be a problem on its own - I could maybe use a binary search or something relatively quick to find the nearest key above or below my value.
My problem is I need to include a number of each in the same table. In other words, I could have three criteria - two with A/B entries, and one with less-than-x/greater-than-x entries, where x is in no way fixed. So in this example I would have a table with 8 entries. I can't just do a binary search for the boundary, because the closest boundary won't necessarily be applicable due to the other criteria. For example, if the first two criteria are A&B, then the closest boundary might be 100, but if the first two criteria are A&A, the closest boundary might be 50. If I want to look up A, A, 101, then I want it to recognise that 50 is the closest boundary that applies - not 100.
I have a procedure to do the lookup, but it gets very slow as the tables get bigger. It basically goes through each criterion and checks whether a match is still possible; if so, it looks at more criteria - if not, it moves on to the next entry in the table. In other words, my procedure cycles through the table entries one by one, checking each for a match. I have tried to optimise it by ensuring the tables that are input to the procedure are as small as possible, and by making sure it looks at the criteria that are least likely to match first (so that it rejects each entry as quickly as possible), but it is still very slow.
The biggest tables are maybe 200 rows with about 10 criteria to check, but many are much smaller (maybe 10x5). The issue is that I need to call the procedure many times during my application, so algorithms with some initial overhead don't necessarily make things better. I do have some scope to change the format of the tables before runtime but I would like to keep away from that as much as possible (while recognising it may be the only way forward).
I've done quite a bit of research but I haven't had any luck. Does anyone know of any algorithms that have been designed to tackle this kind of problem? I was really hoping that there would be some clever hash function or something that means I won't have to cycle through the tables, but from my limited knowledge something like that would struggle here. I feel confident that I understand the problem well enough to gradually optimise the solution I have at the moment, but I want to be sure I've not missed a much better solution.
Apologies for the very long and abstract description of the problem - hopefully it's clear what I'm trying to do. I'll amend my question if it's unclear.
Thanks for any help.
This is basically what a query optimizer does in SQL land. There are fast, free, in-memory databases for exactly this purpose. Check out SQLite: https://www.sqlite.org/inmemorydb.html.
It sounds like you are doing what is called a 'full table scan' for each query, which is the last resort for a query optimizer.
As I understand it, you want to select entries by criteria like
A & not B & x1 >= lower_x1 & x1 < upper_x1 & x2 >= lower_x2 & x2 < upper_x2 & ...
The easiest way is to keep the entries sorted by each possible xi, i = 1, 2, ..., in separate sets, and to have separate 'words' for the various combinations of A, B, ...
The search works as follows:
1. Select the proper word by the Boolean criteria combination.
2. For each i, find the population of the lower_xi..upper_xi range in the corresponding set (this operation is O(log N)).
3. Select the i where the population is the lowest.
4. While iterating through the instances in the lower_xi..upper_xi range, filter the results by checking the other upper/lower bound criteria (for all xj where j != i).
Note that this is a general solution. Of course, if you know some relation between your bounds, you may use a list sorted by the respective combination(s) of item values.
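A rough sketch of steps 1 and 4 in Prolog (the language used elsewhere on this page), assuming hypothetical word/2 facts that map each Boolean combination to its rows, with each row of the form row(Id, Values); the O(log N) range counting of steps 2 and 3 is omitted. The lambda syntax is from SWI-Prolog's library(yall):

% word(+BoolKey, -Rows): hypothetical index, one 'word' per Boolean
% combination, e.g. word([a,b], [row(p1,[50]), row(p2,[100])]).
% lookup/4 picks the word for the Boolean part (step 1), then filters
% each row against the numeric bounds (step 4).
lookup(BoolKey, Lowers, Uppers, Id) :-
    word(BoolKey, Rows),
    member(row(Id, Values), Rows),
    maplist([Lo,V,Hi]>>(Lo =< V, V < Hi), Lowers, Values, Uppers).

The point is that the Boolean part of the key costs a single indexed lookup rather than a scan; only the rows within one word need to be filtered.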

Function returns false - SWI-Prolog

I am currently working on my project for my programming classes. The task is to solve the Skyline problem. A city's skyline is the outer contour of the silhouette formed by all the buildings in that city when viewed from a distance. So, basically, you take a list of buildings with 3 parameters each (initial position, final position, height) and you have to return the coordinates of the skyline.
I have two base cases: the first is used when the list is empty, the second when there is only one building. The last clause is used when there are two or more buildings in the list.
The function 'divide' receives a list of buildings and returns two lists of buildings.
My problem is:
divide([],[],[]).
divide([C|[]],[C|ed(X1,X2,H1)]):-
    divide([],ed(X1,X2,H1),[]).
divide([ed(X1,X2,H1),ed(Y1,Y2,H2)|L],L1,L2):-
    L1 = [ed(X1,X2,H1)|L1],
    L2 = [ed(Y1,Y2,H2)|L2],
    divide(L,L1,L2).
When I run 'divide' at the console it returns false as the answer instead of returning a list. I just can't figure out what is wrong or where the problem might be. It should return two lists, not 'false'.
An example:
?- divide([(1,2,3),(2,3,4),(1,4,5),(6,2,4)],X,Y).
false.
Any ideas?
Sorry for the bad English, and thanks.
Always look at the warnings your Prolog system produces. If you ignore them, don't be surprised to fail. Here is another error:
divide([C|[]],[C|ed(X1,X2,H1)]):-
                 ^^^^^^^^^^^^
This is not a well formed list.
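For reference, a minimal well-formed version of the splitting step, under one reading of the intent (dealing the buildings alternately into the two halves); note the query must also pass ed(X1,X2,H) terms, not (X1,X2,H) tuples:

divide([], [], []).
divide([B], [B], []).
divide([B1,B2|L], [B1|L1], [B2|L2]) :-
    divide(L, L1, L2).

For example, divide([ed(1,2,3),ed(2,3,4),ed(1,4,5),ed(6,2,4)], X, Y) yields X = [ed(1,2,3),ed(1,4,5)] and Y = [ed(2,3,4),ed(6,2,4)].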

Large foreseeable Sudoku, with 81 integers

I am making a simple Sudoku for my school exams. I have decided to have only one Sudoku; its numbers are then shuffled around to make it look like a new one every time. The problem is that I need to handle 81 integers, some of which have to be visible and some not. I cannot see an easy way to handle these ints, except with arrays, but that didn't go very well.
If you have any suggestions let me know :)
int[][]
Make it a 9x9 array like the visual sudoku.
Any non-visible number can be negated e.g. -5 instead of 5.
To validate the grid as having a solution, check Math.abs(value) (or whatever the absolute-value function is in your language of choice): iterate from 1 to 9 in each 'square', and then for each row and column.
This will only tell you that you have a starting arrangement whose numbers can be filled in a valid way; it won't tell you that logic alone leads to exactly one answer (e.g. an empty grid is valid but has thousands of solutions).
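As a sketch of that absolute-value check in Prolog (the language of this page), where a unit is any row, column, or 3x3 box, given as a list of nine integers with hidden cells stored negated:

% unit_ok(+Cells): ignoring sign, the nine cells are exactly 1..9.
% msort/2 keeps duplicates, so any repeated digit makes the match fail.
unit_ok(Cells) :-
    maplist([X,A]>>(A is abs(X)), Cells, AbsValues),
    msort(AbsValues, [1,2,3,4,5,6,7,8,9]).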

Algorithm for grouping RESTful routes

Given a list of URLs known to be somewhat "RESTful", what would be a decent algorithm for grouping them so that URLs mapping to the same "controller/action/view" are likely to be grouped together?
For example, given the following list:
http://www.example.com/foo
http://www.example.com/foo/1
http://www.example.com/foo/2
http://www.example.com/foo/3
http://www.example.com/foo/1/edit
http://www.example.com/foo/2/edit
http://www.example.com/foo/3/edit
It would group them as follows:
http://www.example.com/foo

http://www.example.com/foo/1
http://www.example.com/foo/2
http://www.example.com/foo/3

http://www.example.com/foo/1/edit
http://www.example.com/foo/2/edit
http://www.example.com/foo/3/edit
Nothing is known about the order or structure of the URLs ahead of time. In my example, it would be somewhat easy since the IDs are obviously numeric. Ideally, I'd like an algorithm that does a good job even if IDs are non-numeric (as in http://www.example.com/products/rocket and http://www.example.com/products/ufo).
It's really just an effort to say, "Given these URLs, I've grouped them by removing what I think is the 'variable' ID part of the URL."
Aliza has the right idea: you want to look for the 'articulation points' (in REST, basically where a parameter is being passed). Looking only for a single point of change gets tricky.
Example
http://www.example.com/foo/1/new
http://www.example.com/foo/1/edit
http://www.example.com/foo/2/edit
http://www.example.com/bar/1/new
These can be grouped several equally good ways, since we have no idea of the URL semantics. This really boils down to one question: is this piece of the URL part of the REST descriptor, or a parameter? If we know what all the descriptors are, the rest are parameters and we are done.
Given a sufficiently large dataset, we'd want to look at the statistics of all URLs at each depth, e.g. /x/y/z/t/. We would count the number of occurrences in each slot and generate a large joint probability distribution table.
We can now look at the distribution of symbols. A high count in a slot means it's likely a parameter. We would start from the bottom and look for conditional probability events, i.e., what is the probability of x being foo, then what is the probability of y being something given x, and so on. I'd have to think more to determine a systematic way of extracting these, but it seems like a promising start.
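As a tiny illustration of the slot statistics in SWI-Prolog, assuming for simplicity that the URLs have already been split into equal-length segment lists (as in the answer below): transpose/2 from library(clpfd) turns rows into slots, and counting distinct symbols per slot flags likely parameter positions.

:- use_module(library(clpfd)).   % for transpose/2

% distinct_per_slot(+Rows, -Counts): Rows are equal-length lists of URL
% segments; Counts holds, for each slot, how many distinct symbols occur.
% sort/2 removes duplicates, so the length of the sorted slot is the
% number of distinct symbols.
distinct_per_slot(Rows, Counts) :-
    transpose(Rows, Slots),
    maplist([Slot,N]>>(sort(Slot, Distinct), length(Distinct, N)),
            Slots, Counts).

For the rows [[foo,1,edit],[foo,2,edit],[foo,3,edit]] this gives [1,3,1], singling out the ID slot.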
Split each URL into an array of strings, with '/' as the delimiter.
E.g. http://www.example.com/foo/1/edit gives the array [http:,www.example.com,foo,1,edit].
If two arrays (URLs) share the same value in all indices except one, they belong to the same group.
E.g. http://www.example.com/foo/1/edit = [http:,www.example.com,foo,1,edit] and
http://www.example.com/foo/2/edit = [http:,www.example.com,foo,2,edit]. The arrays match in all indices except #3, which is 1 in the first array and 2 in the second. Therefore, the URLs belong to the same group.
It is easy to see that URLs like http://www.example.com/foo/3 and http://www.example.com/foo/1/edit will not belong to the same group under this algorithm.
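A small sketch of this rule in SWI-Prolog, using split_string/4 (which also yields an empty segment for the '//' after http:, but that is harmless since it appears identically in both arrays):

% same_group(+Url1, +Url2): the two URLs differ in exactly one segment.
same_group(Url1, Url2) :-
    split_string(Url1, "/", "", Parts1),
    split_string(Url2, "/", "", Parts2),
    differ_in_one(Parts1, Parts2).

% differ_in_one(+Xs, +Ys): equal-length lists differing at exactly one index.
differ_in_one([X|Xs], [Y|Xs]) :- X \= Y.
differ_in_one([X|Xs], [X|Ys]) :- differ_in_one(Xs, Ys).

For example, same_group("http://www.example.com/foo/1/edit", "http://www.example.com/foo/2/edit") succeeds, while comparing http://www.example.com/foo/3 with http://www.example.com/foo/1/edit fails because the arrays have different lengths.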
