I can't understand why this won't work:
EntryRow = input("Which row would you like to book for?")
Upper = EntryRow.upper()
while Upper != 'A' or 'B' or 'C' or 'D' or 'E':
    print("That is not a row")
    EntryRow = input("Which row would you like to book for?")
    Upper = EntryRow.upper()
'!=' has higher precedence than 'or'. What your code really does is:
while (Upper != 'A') or 'B' or 'C' or 'D' or 'E':
Which is always true, because a non-empty string such as 'B' is truthy.
Try this instead:
while not Upper in ('A', 'B', 'C', 'D', 'E'):
You need to explicitly write out each condition in full and combine them using and:
while Upper != 'A' and Upper != 'B' and ...
The interpreter treats 'B', 'C', and so on as independent conditions, each of which evaluates to True, so your while condition is always true.
You are using or the wrong way. (See Andrew's answer for the right way).
One possible shortcut is to use a containment check:
while Upper not in ('A', 'B', 'C', 'D', 'E'):
...
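Putting it together, the loop from the question with only the condition changed:
EntryRow = input("Which row would you like to book for?")
Upper = EntryRow.upper()
while Upper not in ('A', 'B', 'C', 'D', 'E'):
    print("That is not a row")
    EntryRow = input("Which row would you like to book for?")
    Upper = EntryRow.upper()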
Assume you have an unsorted list of distinct items, for example:
['a', 'z', 'g', 'i', 'w', 'p', 't']
You also get a list of Insert and Remove operations. An Insert operation is composed of the item to insert and the index to insert it at. For example: Insert('s', 5)
Remove operations are expressed using the element to remove. For example: Remove('s')
So a list of operations may look like this:
Insert('s', 5)
Remove('p')
Insert('j', 0)
Remove('a')
I am looking for the most efficient algorithm that can translate the list of operations so that they are index based. That means that there is no need to modify the insert operations, but the remove operations should be replaced with a remove operation stating the current index of the item to be removed (not the original one).
So the output of the example should look like this:
Starting set: ['a', 'z', 'g', 'i', 'w', 'p', 't']
Insert('s', 5) (list is now: ['a', 'z', 'g', 'i', 'w', 's', 'p', 't'])
Remove(6) (list is now: ['a', 'z', 'g', 'i', 'w', 's', 't'])
Insert('j', 0) (list is now: ['j', 'a', 'z', 'g', 'i', 'w', 's', 't'])
Remove(1) (list is now: ['j', 'z', 'g', 'i', 'w', 's', 't'])
Obviously, we can scan the list for the item to be removed after each operation, which means the entire algorithm takes O(n*m), where n is the size of the list and m is the number of operations.
The question is - is there a more efficient algorithm?
You can make this more efficient if you have access to all of the remove operations ahead of time, and they are significantly (context-defined) shorter than the object list.
You can maintain a list of items of interest: those to be removed. Look up their initial positions -- either in the original list, or upon insertion. Whenever an insertion is made at position n, each element of this list past that position gets its index increased by one; for each such deletion, decrease by one.
This is not fundamentally different from the obvious method; it is merely quantitatively faster, since the work per operation is proportional to the number of pending removals rather than the full list size, effectively replacing the n in the O(n*m) bound with a potentially much smaller factor.
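To make the bookkeeping concrete, here is a minimal Python sketch of that idea (the tuple format and function name are just illustrative, not from the question):
def translate_removes(initial, operations):
    # Which items will eventually be removed?
    to_remove = {op[1] for op in operations if op[0] == 'remove'}
    # Current index of each to-be-removed item that is already in the list.
    tracked = {item: i for i, item in enumerate(initial) if item in to_remove}
    translated = []
    for op in operations:
        if op[0] == 'insert':
            _, item, idx = op
            # Every tracked item at or past the insertion point shifts right.
            for k in tracked:
                if tracked[k] >= idx:
                    tracked[k] += 1
            if item in to_remove:
                tracked[item] = idx
            translated.append(op)
        else:  # 'remove'
            idx = tracked.pop(op[1])
            # Every tracked item past the removal point shifts left.
            for k in tracked:
                if tracked[k] > idx:
                    tracked[k] -= 1
            translated.append(('remove', idx))
    return translated

ops = [('insert', 's', 5), ('remove', 'p'), ('insert', 'j', 0), ('remove', 'a')]
print(translate_removes(['a', 'z', 'g', 'i', 'w', 'p', 't'], ops))
# [('insert', 's', 5), ('remove', 6), ('insert', 'j', 0), ('remove', 1)]
Each operation only touches the tracked items, so the cost per operation depends on the number of pending removals, not on the length of the list.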
Let's say we have two arrays m and n containing characters from the set {a, b, c, d, e}. Assume each character in the set has an associated cost; consider the costs to be a=1, b=3, c=4, d=5, e=7.
For example:
m = ['a', 'b', 'c', 'd', 'd', 'e', 'a']
n = ['b', 'b', 'b', 'a', 'c', 'e', 'd']
Suppose we would like to merge m and n to form a larger array s.
An example of s array could be
s = ['a', 'b', 'c', 'd', 'd', 'e', 'a', 'b', 'b', 'b', 'a', 'c', 'e', 'd']
or
s = ['b', 'a', 'd', 'd', 'd', 'b', 'e', 'c', 'b', 'a', 'b', 'a', 'c', 'e']
If two or more identical characters are adjacent to each other, a penalty is applied equal to: the number of adjacent characters of the same type * the cost for that character. Consider the second example of s above, which contains the sub-array ['d', 'd', 'd']. In this case a penalty of 3*5 is applied, because the cost associated with d is 5 and the number of repetitions of d is 3.
Design a dynamic programming algorithm which minimises the cost associated with s.
Does anyone have any resources, papers, or algorithms they could share to help point me in the right direction?
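For orientation, here is one possible memoized formulation in Python. It assumes (the question does not state this explicitly) that s must be an interleaving of m and n preserving each array's relative order, and that a run of L >= 2 identical characters of cost c incurs a penalty of L * c; it is a sketch, not a definitive solution:
from functools import lru_cache

cost = {'a': 1, 'b': 3, 'c': 4, 'd': 5, 'e': 7}

def min_merge_cost(m, n):
    m, n = tuple(m), tuple(n)

    @lru_cache(maxsize=None)
    def best(i, j, last, run_ge2):
        # i, j: how many items of m and n have already been placed.
        # last: the most recently placed character ('' at the start).
        # run_ge2: whether the current run of `last` already has length >= 2.
        if i == len(m) and j == len(n):
            return 0

        def place(ch, ni, nj):
            if ch != last:
                return best(ni, nj, ch, False)                # new run of length 1, no penalty yet
            if not run_ge2:
                return 2 * cost[ch] + best(ni, nj, ch, True)  # run grows 1 -> 2, penalty becomes 2*cost
            return cost[ch] + best(ni, nj, ch, True)          # run grows k -> k+1, penalty grows by cost

        options = []
        if i < len(m):
            options.append(place(m[i], i + 1, j))
        if j < len(n):
            options.append(place(n[j], i, j + 1))
        return min(options)

    return best(0, 0, '', False)

m = ['a', 'b', 'c', 'd', 'd', 'e', 'a']
n = ['b', 'b', 'b', 'a', 'c', 'e', 'd']
print(min_merge_cost(m, n))
The actual merged array can be recovered by recording, alongside each memoized value, which of the two choices achieved the minimum.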
I want to build a binary matrix in R for each geohash that I have in a dataframe, using alphabet letters and numbers.
That is, each character of a geohash should be matched with 1 where it corresponds to the given letter or number, and 0 otherwise, in order to build a complete binary matrix for each geohash.
The reason I want to build these matrices is that I want to apply an encoder/decoder deep learning algorithm for event prediction.
Thank you
alphabetandnumbers <- c('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i',
                        'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r',
                        's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
                        0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
names(df2sub)
t2 <- table(alphabetandnumbers, seq_along(df2sub$geohash))
t2[t2 > 1] <- 1
t2[1:1000]
I also tried this tactic, without any success:
V1 <- df2sub[['geohash']]
V2 <- array(alphabetandnumbers, dim = length(alphabetandnumbers))
m <- as.matrix(V1)
id <- cbind(rowid = as.vector(t(row(m))),
            colid = as.vector(t(m)))
id <- id[complete.cases(id), ]
id
out <- matrix(0, nrow = nrow(m), ncol = max(m))
out[id] <- 1
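Not an answer as such, but to pin down the intended output, here is a small sketch of the per-geohash one-hot matrix (shown in Python purely for illustration; 'u4pru' is a made-up geohash, and the vocabulary follows the 26 letters plus 10 digits from the question):
vocab = list('abcdefghijklmnopqrstuvwxyz') + [str(d) for d in range(10)]
col = {ch: i for i, ch in enumerate(vocab)}

def geohash_to_matrix(geohash):
    # One row per character of the geohash, one column per letter/digit.
    matrix = []
    for ch in geohash.lower():
        row = [0] * len(vocab)
        row[col[ch]] = 1
        matrix.append(row)
    return matrix

for row in geohash_to_matrix('u4pru'):
    print(row)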
Not a technical question, just asking to be pointed in the right direction for my research.
Are there any models that address the following problem:
Finding the route starting from X that passes through A, B, and C in the most efficient order. In the case below, X, A, C, B is the optimal path (total cost 12).
Thanks
You may want to look at the travelling salesman problem. There are a lot of resources on how to implement it, as it is a very common programming problem.
https://en.wikipedia.org/wiki/Travelling_salesman_problem
This is a Python implementation that enumerates all paths from start to end and picks the cheapest one that visits the required nodes:
def find_all_paths(graph, start, end, path=[]):
    required = ('A', 'B', 'C')
    path = path + [start]
    if start == end:
        return [path]
    if start not in graph:
        return []
    paths = []
    for node in graph[start]:
        if node not in path:
            newpaths = find_all_paths(graph, node, end, path)
            for newpath in newpaths:
                # Keep only complete paths that visit every required node.
                if all(e in newpath for e in required):
                    paths.append(newpath)
    return paths

def min_path(graph, start, end):
    paths = find_all_paths(graph, start, end)
    mt = 10**99
    mpath = []
    print('\tAll paths:', paths)
    for path in paths:
        # Cost of a path is the sum of the edge weights along it.
        t = sum(graph[i][j] for i, j in zip(path, path[1:]))
        print('\t\tevaluating:', path, t)
        if t < mt:
            mt = t
            mpath = path
    e1 = ' '.join('{}->{}:{}'.format(i, j, graph[i][j]) for i, j in zip(mpath, mpath[1:]))
    e2 = str(sum(graph[i][j] for i, j in zip(mpath, mpath[1:])))
    print('Best path: ' + e1 + ' Total: ' + e2 + '\n')

if __name__ == "__main__":
    graph = {'X': {'A': 5, 'B': 8, 'C': 10},
             'A': {'C': 3, 'B': 5},
             'C': {'A': 3, 'B': 4},
             'B': {'A': 5, 'C': 4}}
    min_path(graph, 'X', 'B')
Prints:
All paths: [['X', 'A', 'C', 'B'], ['X', 'C', 'A', 'B']]
evaluating: ['X', 'A', 'C', 'B'] 12
evaluating: ['X', 'C', 'A', 'B'] 18
Best path: X->A:5 A->C:3 C->B:4 Total: 12
The 'guts' of it is recursively finding all paths and filtering to only those paths that visit the required nodes ('A', 'B', 'C'). The edge weights along each path are then summed to find the minimum-cost path.
There are certainly more efficient approaches, but it is hard to get simpler. You asked for a model, so here is a working implementation.
I need to generate permutations with restrictions on the ordering.
For example, in the list [A, B, C, D],
A must always come before B, and C must always come before D. There may or may not also be E, F, G, ... which have no restrictions.
The input would look like this: [[A,B],[C,D],[E],[F]]
Is there a way to do this without computing unnecessary permutations or backtracking?
Normally, a permutations algorithm might look somewhat like this (Python):
def permutations(elements):
    if elements:
        for i, current in enumerate(elements):
            front, back = elements[:i], elements[i+1:]
            for perm in permutations(front + back):
                yield [current] + perm
    else:
        yield []
You iterate the list, taking each of the elements as the first element, and combining them with all the permutations of the remaining elements. You can easily modify this so that the elements are actually lists of elements, and instead of just using the current element, you pop the first element off that list and insert the rest back into the recursive call:
def ordered_permutations(elements):
    if elements:
        for i, current in enumerate(elements):
            front, back = elements[:i], elements[i+1:]
            first, rest = current[0], current[1:]
            for perm in ordered_permutations(front + ([rest] if rest else []) + back):
                yield [first] + perm
    else:
        yield []
Results for ordered_permutations([['A', 'B'], ['C', 'D'], ['E'], ['F']]):
['A', 'B', 'C', 'D', 'E', 'F']
['A', 'B', 'C', 'D', 'F', 'E']
['A', 'B', 'C', 'E', 'D', 'F']
[ ... some 173 more ... ]
['F', 'E', 'A', 'C', 'D', 'B']
['F', 'E', 'C', 'A', 'B', 'D']
['F', 'E', 'C', 'A', 'D', 'B']
['F', 'E', 'C', 'D', 'A', 'B']
Note, though, that this will create a lot of intermediate lists in each recursive call. Instead, you could use stacks, popping the first element off the stack and putting it back on after the recursive calls.
def ordered_permutations_stack(elements):
    if any(elements):
        for current in elements:
            if current:
                first = current.pop()
                for perm in ordered_permutations_stack(elements):
                    yield [first] + perm
                current.append(first)
    else:
        yield []
The code might be a bit easier to grasp, too. In this case, you have to reverse the sublists, i.e. call it as ordered_permutations_stack([['B', 'A'], ['D', 'C'], ['E'], ['F']])
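For example, with the sublists reversed so that pop() hands back the element that must come first:
elements = [['B', 'A'], ['D', 'C'], ['E'], ['F']]
for perm in ordered_permutations_stack(elements):
    print(perm)
# First results: ['A', 'B', 'C', 'D', 'E', 'F'], ['A', 'B', 'C', 'D', 'F', 'E'], ...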