Before I write something about the problem, I need to let you know:
This problem is my homework (I had about a week to turn in a working program)
I have been working on it every day for about a week, trying to figure out a solution on my own
I'm not asking for a complete program; I need a general idea of the algorithm
Problem:
Given: a wordlist and a "grid", for example:
grid (X means any letter):
X X
XXXX
X X
XXXX
wordlist:
ccaa
baca
baaa
bbbb
The task is to find a "solution": is it possible to fit words from the wordlist into the given grid? If there is at least one solution, print one (any correct one). If not, print a message saying that no solution exists. For the example above, there is a solution:
b c
baca
b a
baaa
It's hard for me to describe everything I've already tried (English is not my native language, and I also have a lot of notes full of ideas that didn't work).
My naive algorithm works something like this:
The first word only needs the proper length, so pick any (say, the first) word of that length (I'll use the example grid and wordlist above to demonstrate what I mean):
c X
cXXX
a X
aXXX
For the first shared letter (at the crossing of two words), find any (the first) word that fits the grid, i.e. has the proper length and the shared letter at the proper position. If there is no such word, go back to (1) and take another first word. In the original example there is no word which starts with "c", so we go back to (1) and select the next word (this step repeats a few times until we reach "bbbb" for the first word). Now we have:
b X
bXXX
b X
bXXX
Then we look for a word (or words) starting with "b", for example:
b X
baca
b X
bXXX
The general process: try to find words that fit the given grid together. If there are no such words, go back to the previous step and try another combination; if no combination works there either, there is no solution.
Everything above is chaotic; I hope you can at least understand the problem description. I wrote a draft of an algorithm, but I'm not sure whether it works or how to code it properly (in my case, in C++). Moreover, there are cases (even in the example above) where we need to find a word that depends on two or more other words.
Maybe I just can't see something obvious, maybe I'm too stupid, maybe... Well, I have really tried to solve this problem. I don't know English well enough to precisely describe my thinking, so I can't put all my notes here (I tried to describe one idea and it was hard). Believe it or not, I've spent many long hours trying to figure out a solution, and I have almost nothing...
If you can describe a solution or give a hint on how to solve this problem, I would really appreciate it.
The crossword problem is NP-complete, so your best shot is brute force: just try all possibilities and stop when one of them is valid. Return failure when you have exhausted all possibilities.
A reduction proving that this problem is NP-complete can be found in this article, section 3.3.
A brute-force solution using backtracking could look like this [pseudocode]:
solve(words, grid):
    if words is empty:
        if grid.isValid():
            return grid
        else:
            return None
    for each word in words:
        possibleSol <- grid.fillFirst(word)
        ret <- solve(words \ {word}, possibleSol)
        if ret != None:
            return ret
    return None
Here we assume fillFirst() is a function that fills the first slot which was not already filled ["first" can actually be any ordering of the empty slots, but it must be consistent!], and isValid() returns a boolean indicating whether the given grid is a valid solution.
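For concreteness, here is a small Python sketch of that backtracking idea (not the C++ the assignment asks for; the grid representation, the slot extraction, and all helper names are assumptions of mine rather than part of the pseudocode above):

# grid: list of strings, 'X' = cell to fill, ' ' = blocked cell
def find_slots(grid):
    """Return every maximal horizontal/vertical run of >= 2 fillable cells
    as a tuple of (row, col) coordinates."""
    slots = []
    rows, cols = len(grid), max(len(r) for r in grid)
    cell = lambda r, c: grid[r][c] if c < len(grid[r]) else ' '
    for r in range(rows):                    # horizontal slots
        run = []
        for c in range(cols + 1):
            if c < cols and cell(r, c) == 'X':
                run.append((r, c))
            else:
                if len(run) >= 2:
                    slots.append(tuple(run))
                run = []
    for c in range(cols):                    # vertical slots
        run = []
        for r in range(rows + 1):
            if r < rows and cell(r, c) == 'X':
                run.append((r, c))
            else:
                if len(run) >= 2:
                    slots.append(tuple(run))
                run = []
    return slots

def solve(slots, words, filled=None):
    """Assign one word to each slot; 'filled' maps (row, col) -> letter."""
    filled = filled or {}
    if not slots:
        return filled                        # every slot placed consistently
    slot, rest = slots[0], slots[1:]
    for i, word in enumerate(words):
        if len(word) != len(slot):
            continue
        # the word must agree with letters already fixed by crossing slots
        if any(filled.get(pos, ch) != ch for pos, ch in zip(slot, word)):
            continue
        trial = dict(filled)
        trial.update(zip(slot, word))
        result = solve(rest, words[:i] + words[i + 1:], trial)
        if result is not None:
            return result
    return None                              # nothing fits this slot: backtrack

grid = ["X X ", "XXXX", "X X ", "XXXX"]
words = ["ccaa", "baca", "baaa", "bbbb"]
filled = solve(find_slots(grid), words)
if filled is None:
    print("no solution")
else:
    for r, row in enumerate(grid):
        print(''.join(filled.get((r, c), ' ') for c in range(len(row))))

For the example it prints the solution shown in the question; translating the same structure to C++ is mostly a matter of choosing representations for the slots and the partial fill.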
I wrote a program this morning. Here is a slightly more efficient version in pseudocode:
# pseudo-code
solve(words, grid): solve(words, grid, None)

solve(words, grid, filledPositions):
    if words is empty:
        if grid is solved:
            return grid
        else:
            raise (no solution)
    let (current position) = the first possible word position in grid
                             that is not in filledPositions
        # note: a word position must have no letters before the word
        #   ('before the word' means, e.g., to the left of a horizontal word)
        # no letters may be placed over a ' '
        # no letters may be placed off the grid
        # note: a location may have two 'positions': one across, one down
    for each word in words:
        make a copy of grid
        try:
            fill grid copy with the current word at the current position
        except (cannot fill position):
            continue    # try the next word
        try:
            return solve(words \ {word}, grid copy,
                         filledPositions + {current position})
        except (no solution):
            continue    # try the next word
    raise (no solution)
Here is my code for fitting a word horizontally in the grid: http://codepad.org/4UXoLcjR
Here are some things I used from the STL:
http://www.cplusplus.com/reference/algorithm/remove_copy/
http://www.cplusplus.com/reference/stl/vector/
Related
Question:
Given a piece of text like "This is a test", how can one build a machine learning model that outputs the number of words? In this example the word count is 4. After training, the model should be able to predict the word count of a text.
I know it is easy to write an ordinary program for this (like the pseudocode below),
data: memory.punctuation = ['~', '`', '!', '@', '#', '$', '%', '^', '&', '*', ...]
f: count.word(text) -> count =
    f: tokenize(text) --list-->
    f: count.token(list, filter) where filter(token) = <token not in memory.punctuation> -> count
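For reference, a runnable Python version of that non-ML pseudocode might look like this (the punctuation set and the whitespace tokenizer are assumptions of mine):

import re

PUNCTUATION = set("~`!@#$%^&*()-_=+[]{};:'\",.<>/?\\|")

def word_count(text):
    # tokenize on whitespace, then drop tokens made purely of punctuation
    tokens = re.split(r"\s+", text.strip())
    return sum(1 for tok in tokens
               if tok and not all(ch in PUNCTUATION for ch in tok))

print(word_count("This is a test"))   # 4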
However, in this question we are required to use a machine learning algorithm. I wonder how a machine can learn the concept of counting (we currently know machine learning is good at classification). Any ideas and suggestions? Thanks in advance.
Failed attempts:
We can use something like word2vec (an encoder) to build word vectors. With a seq2seq approach, we can train on pairs like "This is a test <s> 4 <e>" and "This is very very long sentence and the word count is greater than ten <s> 4 1 <e>" (with "4 1" representing the number 14). However, this does not work, because the attention model is meant to line up similar vectors, as in text translation (This is a test --> 这(this) 是(is) 一个(a) 测试(test)); it is hard for it to find a relationship between [This, is, ...] and 4, which is an aggregated number (i.e. the model does not converge).
We know machine learning is good at classification. If we treat "4" as a class, the number of classes is infinite. If we use a trick and predict count/text.length instead, I have not been able to get a model that fits even the training data set (it does not converge); for example, if we train the model on many short sentences, it fails to predict the length of long sentences. It may be related to an information paradox: in principle we could encode the data of a book as a number 0.x and use a machine to mark a position on a rod that splits it into two parts of length a and b with a/b = 0.x, but no such machine can actually be built.
What about a regression problem?
I think it would work quite well, and in the end it would output nearly whole numbers almost all the time.
You can also train a simple RNN to do the job, assuming you use a one-hot encoding and take the output from the last state.
If V_h is all zeros except at the space index (where it is 1), and V_x is as well, then the network will effectively sum the spaces, and if c is 1 at the end, the output will be the number of words, for inputs of every length!
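As a toy illustration of that hand-set RNN (no training; the character vocabulary, the scalar state, and the final +1 are my own assumptions, and it assumes words are separated by single spaces):

import numpy as np

vocab = " abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"   # toy vocabulary, index 0 = space
char_to_idx = {ch: i for i, ch in enumerate(vocab)}

def one_hot(ch):
    v = np.zeros(len(vocab))
    v[char_to_idx[ch]] = 1.0
    return v

# a linear "RNN" with scalar state: h_t = 1 * h_{t-1} + w_x . x_t,
# where w_x is zero everywhere except at the space index, so the state sums the spaces
w_x = np.zeros(len(vocab))
w_x[char_to_idx[" "]] = 1.0

def word_count(text):
    h = 0.0
    for ch in text:
        h = 1.0 * h + w_x @ one_hot(ch)
    return int(h) + 1            # words = spaces + 1

print(word_count("This is a test"))   # 4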
I think we can treat it as a classification problem, with a character as the input and whether it is a word breaker as the output.
In other words, at each time point t we output whether the input character at that time point is a word breaker (YES) or not (NO). If yes, increase the word count; if no, read the next character.
In modern English I don't think there are many very long words, so a simple RNN model should perhaps do the job without concerns about vanishing gradients.
Let me know what you think!
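A minimal sketch of that per-character scheme, with a hard-coded rule standing in for the trained classifier (the breaker set is a placeholder assumption):

def is_word_breaker(ch):
    # stand-in for a per-character YES/NO classifier
    return ch in " \t\n.,;:!?"

def word_count(text):
    count, inside_word = 0, False
    for ch in text:
        if is_word_breaker(ch):
            if inside_word:
                count += 1       # the breaker closes the current word
            inside_word = False
        else:
            inside_word = True
    return count + (1 if inside_word else 0)   # a word may end at the end of the text

print(word_count("This is a test"))   # 4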
Use NLTK for counting words:
from nltk.tokenize import word_tokenize
text = "God is Great!"
word_count = len(word_tokenize(text))  # note: word_tokenize also returns punctuation such as '!' as tokens
print(word_count)
I've been given an assignment to write, using Prolog, a solver for the Battleships solitaire puzzle. For those unfamiliar with it, the puzzle deals with a 6 by 6 grid on which a series of ships are placed according to the provided constraints on each row and column, i.e. the first row must contain 3 squares with ships, the second row must contain 1 square with a ship, the third row must contain 0 squares, etc., for the other rows and columns.
Each puzzle comes with its own set of constraints and revealed squares, typically two. An example can be seen here:
battleships
So, here's what I've done:
% step(+State, -NextState): place one more ship part on the board.
step([ShipCount,Rows,Cols,Tiles],[ShipCount2,Rows2,Cols2,Tiles2]):-
    ShipCount2 is ShipCount+1,
    nth1(X,Cols,X1),              % pick a column X whose remaining count X1
    X1\==0,                       % is non-zero
    nth1(Y,Rows,Y1),              % pick a row Y whose remaining count Y1
    Y1\==0,                       % is non-zero
    not(member([X,Y,_],Tiles)),   % the square is not already occupied
    pairs(Tiles,TilesXY),
    notdiaglist(X,Y,TilesXY),     % no diagonal contact with existing parts
    member(T,[1,2,3,4,5,6]),      % choose the type of ship part
    append([X,Y],[T],Tile),
    append([Tile],Tiles,Tiles2),
    dec_elem1(X,Cols,Cols2),      % decrement the column and row counts
    dec_elem1(Y,Rows,Rows2).

% dec_elem1(+N, +List, -List2): decrement the N-th element of List.
dec_elem1(1,[A|Tail],[B|Tail]):- B is A-1.
dec_elem1(Count,[A|Tail],[A|Tail2]):- Count1 is Count-1, dec_elem1(Count1,Tail,Tail2).

% neib/4: (X2,Y2) is (X1,Y1) itself or one of its eight neighbours.
neib(X1,Y1,X2,Y2) :- X2 is X1,   (Y2 is Y1-1; Y2 is Y1+1; Y2 is Y1).
neib(X1,Y1,X2,Y2) :- X2 is X1-1, (Y2 is Y1-1; Y2 is Y1+1; Y2 is Y1).
neib(X1,Y1,X2,Y2) :- X2 is X1+1, (Y2 is Y1-1; Y2 is Y1+1; Y2 is Y1).

% notdiag/4: the two squares do not touch diagonally.
notdiag(X1,Y1,X2,Y2) :- not(neib(X1,Y1,X2,Y2)).
notdiag(X1,Y1,X2,Y2) :- neib(X1,Y1,X2,Y2), ((X1 == X2, t(Y1,Y2)); (Y1 == Y2, t(X1,X2))).

notdiaglist(_X1,_Y1,[]).
notdiaglist(X1,Y1,[[X2,Y2]|Tail]):- notdiag(X1,Y1,X2,Y2), notdiaglist(X1,Y1,Tail).

% t/2: the two coordinates differ by exactly one.
t(X1,X2):- X is abs(X1-X2), X==1.

% pairs/2: drop the type field, keeping only the [X,Y] coordinates.
pairs([],[]).
pairs([[X,Y,_Z]|Tail],[[X,Y]|Tail2]):- pairs(Tail,Tail2).
I represent a state with a list: [Count,Rows,Columns,Tiles]. The last state must be
[10,[0,0,0,0,0,0],[0,0,0,0,0,0], somelist]. A puzzle starts from an initial state, for example
initial([1, [1,3,1,1,1,2] , [0,2,2,0,0,5] , [[4,4,1],[2,1,0]]]).
I try to find a solution in the following manner:
run:-initial(S),step(S,S1),step(S1,S2),....,step(S8,F).
Now, here's the difficulty: if I restrict myself to one type of ship part by using
member(T,[1])
instead of
member(T,[1,2,3,4,5,6])
it works fine. However, when I use the full range of possible values for T, which are needed later, the query runs for so long that it never ends. This puzzles me, since:
(a) it works for 6 types of ships but only for 8 steps instead of 9
(b) going from a single type of ship to 6 types increases the number
of options for just the last step by a factor of 6, which
shouldn't have such a dramatic effect.
So, what's going on?
To answer your question directly, what's going on is that Prolog is trying to sift through an enormous space of possibilities.
You're correct that altering that line increases the search space of the last call by a factor of six, but note that the size of the search space of, say, nine calls isn't proportional to nine times the size of one call. Prolog will backtrack on failure, so it's proportional (bounded above, actually) to the size of the possible results of one call raised to the ninth power.
That means we can expect the size of the space Prolog needs to search to grow by at most a factor of 6^9 = 10077696 when we allow T to take on 6 times as many values.
Of course, it doesn't help that (as far as I was able to tell) no solution exists if we call step 9 times starting from initial anyway. Since that last call is going to fail, Prolog will keep trying until it has exhausted all possibilities (of which there are a great many) before it finally gives up.
As far as a solution goes, I'm not sure I know enough about the problem. If the value of T is the kind of ship part that goes in the grid (e.g. single square, half of a 2-square ship, part of a 3-square ship), you should note that this gives you a lot more information than the numbers on the rows/columns.
Right now, in pseudocode, your step looks like this:
Find a (X,Y) pair that has non-zero markings on its row/column
Check that there isn't already a ship there
Check that it isn't diagonal to a ship
Pick a kind of ship-part for it to be.
I'd suggest an approach like this:
Finish any already-placed ship bits to form complete ships (if we can't: fail)
Until we're finished:
Find an acceptable place to place a ship
Check that the markings on the row/column aren't zero
Try to place an entire ship there (instead of a single part)
By using the most specific information that we have first (in this case, the previously placed parts), we can reduce the amount of work Prolog has to do and make things return reasonably fast.
I have read a lot of threads here discussing edit-distance based fuzzy-searches, which tools like Elasticsearch/Lucene provide out of the box, but my problem is a bit different. Suppose I have a dictionary of words, {'cat', 'cot', 'catalyst'}, and a character similarity relation f(x, y)
f(x, y) = 1, if characters x and y are similar
= 0, otherwise
(These "similarities" can be specified by the programmer)
such that, say,
f('t', 'l') = 1
f('a', 'o') = 1
f('f', 't') = 1
but,
f('a', 'z') = 0
etc.
Now if we have a query 'cofatyst', the algorithm should report the following matches:
('cot', 0)
('cat', 0)
('catalyst', 0)
where the number is the 0-based starting index of the match found. I have tried the Aho-Corasick algorithm, and while it works great for exact matching and when a character has relatively few "similar" characters, its performance drops exponentially as the number of similar characters per character increases. Can anyone point me to a better way of doing this? Fuzziness is an absolute necessity, and it must take into account character similarities (i.e., not blindly depend on edit distances alone).
One thing to note is that in the wild, the dictionary is going to be really large.
I might try using cosine similarity, with the position of each character as a feature, and replacing the product between features with a match function based on your character relations.
Not very specific advice, I know, but I hope it helps you.
Edit: expanded answer.
With cosine similarity, you compute how similar two vectors are. In your case the normalisation might not make sense, so what I would do is something very simple (I might be oversimplifying the problem). First, view the C x C matrix as a dependency matrix holding the probability that two characters are related (e.g., P('t' | 'l') = 1). This also allows partial dependencies, to differentiate between perfect and partial matches. After that, compute for each position the probability that the letters of the two words differ (using the complement of P(t_i, t_j)), and aggregate the results with a sum.
This counts the number of positions that differ for a specific pair of words, and it allows you to define partial dependencies. Furthermore, the implementation is very simple and should scale well, which is why I'm not sure whether I have misunderstood your question.
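To make that scoring concrete, here is a naive Python sketch that slides each dictionary word over the query and counts position-wise mismatches under the character relation f (the similarity table, the zero-mismatch threshold, and the brute-force scan are simplifications of mine, not the scalable index the question asks for):

# character pairs declared "similar"; f(x, y) = 1 for these and for x == y
SIMILAR = {('t', 'l'), ('a', 'o'), ('f', 't')}

def f(x, y):
    return 1 if x == y or (x, y) in SIMILAR or (y, x) in SIMILAR else 0

def mismatches(word, query, start):
    """Number of positions where word and query disagree under f."""
    return sum(1 - f(w, q) for w, q in zip(word, query[start:start + len(word)]))

def search(dictionary, query, max_mismatch=0):
    """Slide every dictionary word over the query and report (word, start) matches."""
    hits = []
    for word in dictionary:
        for start in range(len(query) - len(word) + 1):
            if mismatches(word, query, start) <= max_mismatch:
                hits.append((word, start))
    return hits

print(search({'cat', 'cot', 'catalyst'}, 'cofatyst'))
# -> ('cot', 0), ('cat', 0), ('catalyst', 0), in some order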
I am using the Fuse JavaScript library for a project of mine. It is a JavaScript file which works on a JSON dataset, and it is quite fast. Have a look at it.
It implements the full Bitap algorithm, leveraging a modified version of Google's Diff, Match & Patch tool (according to its site).
The code is simple, so it's easy to follow how the algorithm is implemented.
I'm trying to solve the following problem:
I'm analyzing an image, and from this analysis I obtain a set of segments
I want to find the intersection of these lines (best fit)
For this I'm using OpenCV's cvSolve function. For reasonably good input everything works fine.
The problem is that a single bad segment in the input can make the result very different from the expected one.
Details:
The upper-left image shows the "lonely" purple lines influencing the result (all lines are used as input).
The upper-right image shows how a single purple line (the other one removed) can influence the result.
The lower-left image shows what we want: the intersection of the lines as expected (both purple lines eliminated).
The lower-right image shows how the other purple line (with the first one removed) can influence the result.
As you can see, just two lines make the result completely different from the expected one. Any ideas on how to avoid this are appreciated.
Thanks,
Iulian
The algorithm you are using finds, as described in the link, the least-squares-error solution to the problem. This means that if there are multiple intersection points, the result will be an average (for a reasonable definition of average) of the real solutions.
I would try an iterative solution: if the error of the first solution is too large, remove from the set of segments the one farthest from the solution, and iterate until the error is acceptably small. This should remove the segments belonging to other intersection points one by one and converge on the intersection with the most lines nearby.
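A numpy sketch of that iterative idea, using the same A x = b line formulation as the SVD answer further down (the error tolerance and the outlier in the demo are assumptions of mine):

import numpy as np

def intersection(segments):
    """Least-squares intersection point of the lines through the given segments."""
    A, b = [], []
    for (x1, y1), (x2, y2) in segments:
        dx, dy = x2 - x1, y2 - y1
        A.append([-dy, dx])                  # line: -dy*x + dx*y = -dy*x1 + dx*y1
        b.append(-dy * x1 + dx * y1)
    p, *_ = np.linalg.lstsq(np.array(A, dtype=float), np.array(b, dtype=float), rcond=None)
    return p

def dist_to_line(p, seg):
    (x1, y1), (x2, y2) = seg
    d = np.array([x2 - x1, y2 - y1], dtype=float)
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)      # unit normal of the line
    return abs(n @ (p - np.array([x1, y1], dtype=float)))

def robust_intersection(segments, tol=1.0, min_lines=2):
    segments = list(segments)
    while len(segments) > min_lines:
        p = intersection(segments)
        errors = [dist_to_line(p, s) for s in segments]
        if max(errors) <= tol:
            break
        segments.pop(int(np.argmax(errors)))             # drop the farthest segment
    return intersection(segments)

segs = [((0, 0), (5, 5)), ((0, 10), (5, 10)), ((10, 0), (10, 5)),
        ((0, 40), (1, 39))]                              # the last segment is an outlier
print(robust_intersection(segs))                         # close to (10, 10)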
A general answer to this kind of problem is the RANSAC algorithm (there is a question dealing with it); however, it has a few disadvantages, for example you need to estimate things like "the expected number of outliers" beforehand. Another problem I see with your sample is that removing the two green lines also results in a pretty good fit, so this might be a more general issue.
You can solve this using the SVD. Suppose line1 = (x1,y1)-(x2,y2), line2 = (x2,y2)-(x3,y3), and so on.
Let A x = b, where:
A = [ -(y2-y1)  (x2-x1) ;
      -(y3-y2)  (x3-x2) ;
      ...               ]   --> (n x 2)
x = transpose([s t])        --> (2 x 1)
b = [ -(y2-y1)*x1 + (x2-x1)*y1 ;
      -(y3-y2)*x2 + (x3-x2)*y2 ;
      ...                      ]   --> (n x 1)
Example (MATLAB code):
line1=[0,10;5,10]
line2=[10,0;10,5]
line3=[0,0;5,5]
A=[-(line1(2,2)-line1(1,2)),(line1(2,1)-line1(1,1));
-(line2(2,2)-line2(1,2)),(line2(2,1)-line2(1,1));
-(line3(2,2)-line3(1,2)),(line3(2,1)-line3(1,1))];
b=[(line1(1,1)*A(1,1))+ (line1(1,2)*A(1,2));
(line2(1,1)*A(2,1))+ (line2(1,2)*A(2,2));
(line3(1,1)*A(3,1))+ (line3(1,2)*A(3,2))];
[U D V] = svd(A)
bprime = U'*b
y=[bprime(1)/D(1,1);bprime(2)/D(2,2)]
x=V*y
Recently I wrote a Ruby program to determine solutions to a "Scramble Squares" tile puzzle:
I used TDD to implement most of it, leading to tests that looked like this:
it "has top, bottom, left, right" do
c = Cards.new
card = c.cards[0]
card.top.should == :CT
card.bottom.should == :WB
card.left.should == :MT
card.right.should == :BT
end
This worked well for the lower-level "helper" methods: identifying the "sides" of a tile, determining if a tile can be validly placed in the grid, etc.
But I ran into a problem when coding the actual algorithm to solve the puzzle. Since I didn't know valid possible solutions to the problem, I didn't know how to write a test first.
I ended up writing a pretty ugly, untested, algorithm to solve it:
def play_game
  working_states = []
  after_1 = step_1
  i = 0
  after_1.each do |state_1|
    step_2(state_1).each do |state_2|
      step_3(state_2).each do |state_3|
        step_4(state_3).each do |state_4|
          step_5(state_4).each do |state_5|
            step_6(state_5).each do |state_6|
              step_7(state_6).each do |state_7|
                step_8(state_7).each do |state_8|
                  step_9(state_8).each do |state_9|
                    working_states << state_9[0]
                  end
                end
              end
            end
          end
        end
      end
    end
  end
end
So my question is: how do you use TDD to write a method when you don't already know the valid outputs?
If you're interested, the code's on GitHub:
Tests: https://github.com/mattdsteele/scramblesquares-solver/blob/master/golf-creator-spec.rb
Production code: https://github.com/mattdsteele/scramblesquares-solver/blob/master/game.rb
This isn't a direct answer, but this reminds me of the comparison between the Sudoku solvers written by Peter Norvig and Ron Jeffries. Ron Jeffries' approach used classic TDD, but he never really got a good solution. Norvig, on the other hand, was able to solve it very elegantly without TDD.
The fundamental question is: can an algorithm emerge using TDD?
From the puzzle website:
The object of the Scramble Squares® puzzle game is to arrange the nine colorfully illustrated square pieces into a 12" x 12" square so that the realistic graphics on the pieces' edges match perfectly to form a completed design in every direction.
So one of the first things I would look for is a test of whether two tiles, in a particular arrangement, match one another. This is with regard to your question of validity. Without that method working correctly, you can't evaluate whether the puzzle has been solved. That seems like a nice starting point, a nice bite-sized piece toward the full solution. It's not an algorithm yet, of course.
Once match() is working, where do we go from here? Well, an obvious solution is brute force: from the set of all possible arrangements of the tiles within the grid, reject those where any two adjacent tiles don't match. That's an algorithm, of sorts, and it's pretty certain to work (although for many puzzles the heat death of the universe occurs before a solution is found).
How about collecting the set of all pairs of tiles that match along a given edge (LTRB)? Could you get from there to a solution, quicker? Certainly you can test it (and test-drive it) easily enough.
The tests are unlikely to give you an algorithm, but they can help you to think about algorithms, and of course they can make validating your approach easier.
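For illustration, here is a small Python sketch of that brute force with match() pruning (not the OP's Ruby; the (top, right, bottom, left) tile representation, the lowercase/uppercase "two halves of one picture" convention, and the generated demo instance are all assumptions of mine):

import random
import string

# a tile is a tuple of side labels (top, right, bottom, left);
# 'a' and 'A' are taken to be the two halves of the same edge picture
def match(side1, side2):
    return side1 != side2 and side1.lower() == side2.lower()

def rotations(tile):
    return [tile[i:] + tile[:i] for i in range(4)]

def fits(placed, tile, pos):
    row, col = divmod(pos, 3)
    if col > 0 and not match(placed[pos - 1][1], tile[3]):   # left neighbour's right side vs our left
        return False
    if row > 0 and not match(placed[pos - 3][2], tile[0]):   # upper neighbour's bottom side vs our top
        return False
    return True

def solve(tiles, placed=()):
    if len(placed) == 9:
        return placed
    for i, tile in enumerate(tiles):
        for rot in rotations(tile):
            if fits(placed, rot, len(placed)):
                result = solve(tiles[:i] + tiles[i + 1:], placed + (rot,))
                if result is not None:
                    return result
    return None

def make_instance():
    """Fabricate nine tiles that are known to fit, then shuffle them."""
    label = iter(string.ascii_lowercase)
    right = [[next(label) for _ in range(2)] for _ in range(3)]   # edges between (r, c) and (r, c+1)
    below = [[next(label) for _ in range(3)] for _ in range(2)]   # edges between (r, c) and (r+1, c)
    tiles = []
    for r in range(3):
        for c in range(3):
            top    = below[r - 1][c].upper() if r > 0 else next(label)
            bottom = below[r][c]             if r < 2 else next(label)
            left   = right[r][c - 1].upper() if c > 0 else next(label)
            rgt    = right[r][c]             if c < 2 else next(label)
            tiles.append((top, rgt, bottom, left))
    random.shuffle(tiles)
    return tuple(tiles)

print(solve(make_instance()))

The instance generator is only there so the sketch runs end to end; with a real puzzle you would type in the nine tiles by hand, and match() would compare half-image identifiers instead of letters.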
I don't know if this "answers" the question either.
analysis of the "puzzle"
9 tiles
each has 4 sides
each tile has half a pattern / picture
BRUTE FORCE APPROACH
to solve this problem
you need to generate 9! combinations ( 9 tiles X 8 tiles X 7 tiles... )
limited by the number of matching sides to the current tile(s) already in place
CONSIDERED APPROACH
Q: How many sides are different? i.e. how many matches are there?
there are 9 x 4 = 36 sides, i.e. at most 36 / 2 = 18 matching pairs (each matching side pairs with exactly one other side)
otherwise it's an uncompletable puzzle
NOTE: at least 12 sides must match "correctly" for a 3 x 3 puzzle
label each matching side of a tile with a unique letter
then build a table holding each tile
you will need 4 entries in the table for each tile
4 sides ( starting corners ), hence 4 combinations
if you sort the table by side and INDEX into the table:
side, tile_number
ABcd tile_1
BCda tile_1
CDab tile_1
DAbc tile_1
using the table should speed things up,
since you should only need to match 1 or 2 sides at most;
this limits the amount of NON-PRODUCTIVE tile placing it has to do ( see the indexing sketch after these notes )
depending on the design of the pattern / picture
there are 3 combinations ( orientations ) since each tile can be placed using 3 orientations
- the same ( multiple copies of the same tile )
- reflection
- rotation
God help us if they decide to make life very difficult
by putting similar patterns / pictures on the other side that also need to match
OR even making the tiles into cubes and matching 6 sides!!!
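A tiny sketch of that side-indexed table, reusing the (top, right, bottom, left) tuple convention from the earlier sketch (the dictionary layout is my own choice):

from collections import defaultdict

def build_side_index(tiles):
    """The '4 entries per tile' table: every rotation of every tile, keyed by its top side label."""
    index = defaultdict(list)
    for tile_number, tile in enumerate(tiles):
        for r in range(4):
            rotated = tile[r:] + tile[:r]          # (top, right, bottom, left)
            index[rotated[0]].append((tile_number, rotated))
    return index

# usage sketch: if the tile above exposes bottom edge 'a', the candidates for the slot
# below it are exactly the entries filed under the complementary half 'A':
#   index = build_side_index(tiles)
#   candidates = index['A']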
Using TDD,
you would write tests and then code to solve each small part of the problem,
as outlined above, and then write more tests and code to solve the whole problem.
No, it's not easy; you need to sit down and write tests and code to practice.
NOTE: this is a variation of the map colouring problem
http://en.wikipedia.org/wiki/Four_color_theorem