Solving planning problems - algorithm

I'm new to the AI/algorithms field and am currently trying to solve a problem; so far I've only implemented A* pathfinding on a 2D grid.
The problem goes like this:
Consider a class of 40 students (20 female, 20 male) of varying heights, each with their own seating preference (row, column, or both), and a classroom with 50 seats. Each student must occupy a seat, and the seats are laid out as follows:
[ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]
[ WHITE BOARD ]
To seat them as well as possible, the following scoring scheme has been chosen:
1. No student seated directly in front: +4 points
2. The student seated directly in front is shorter by at least 2 cm: +4 points
3. The student seated beside you is of the opposite sex: +8 points
4. Four students of the same gender occupying a column: -10 points
5. A column with ascending height from the whiteboard: +20 points
6. Seated according to individual preference: +2 points
The goal is to score the maximum points possible.
My idea is to use A* modified to suit the current problem:
Start state: all students unseated
Path cost: the points gained by each transition
Goal state: all students seated
The problem is that the maximum possible score is not known, and I can foresee scenarios where the program fails to plan ahead (it might pick +8 followed by +4, whereas a better choice would be +2 followed by +20). I'm aware that I could look ahead to a certain depth, say 5, but that invites another question: what depth should I use? I don't really want to visit all possible states.
1. How hard is this kind of problem (on a scale from solving a maze to solving chess/Go)?
2. Am I on the right track for solving it?

Constraint 6 looks like it implies that this problem might be NP-complete or NP-hard. That means the A* algorithm won't work (well) on it, because it's impossible (unless P = NP) to create a good admissible heuristic function. Admissible means that the heuristic never overestimates the remaining cost; for a maximization problem like this one, that means it must never underestimate the score still achievable.
If you need to include constraint 6, I'd recommend using algorithms such as Tabu Search, Simulated Annealing, or Late Acceptance, which work well on similar use cases such as dinner-party seating and course scheduling.
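Whichever of those local searches you pick, you'll need an evaluation function that scores a complete seating, since they all work by repeatedly mutating an assignment and rescoring it. As a rough, purely illustrative sketch (hypothetical Student type and 5x10 grid; only the "in front" rules and the "opposite sex beside you" rule are implemented, with the latter counted once per adjacent pair), it could look like this in Rust:

// Illustrative only: hypothetical Student type and 5x10 grid; only two of
// the six scoring rules are shown (the others follow the same pattern).
// Row 0 is assumed to be the row closest to the whiteboard.

#[derive(Clone, Copy, PartialEq)]
enum Sex { Female, Male }

#[derive(Clone, Copy)]
struct Student { height_cm: u32, sex: Sex }

const ROWS: usize = 5;
const COLS: usize = 10;

fn score(seating: &[[Option<Student>; COLS]; ROWS]) -> i32 {
    let mut total = 0;
    for row in 0..ROWS {
        for col in 0..COLS {
            let Some(s) = seating[row][col] else { continue };

            // +4 if nobody sits directly in front, or the student in front
            // is shorter by at least 2 cm.
            if row == 0 {
                total += 4;
            } else {
                match seating[row - 1][col] {
                    None => total += 4,
                    Some(front) if front.height_cm + 2 <= s.height_cm => total += 4,
                    _ => {}
                }
            }

            // +8 per adjacent opposite-sex pair (each pair counted once,
            // by only looking at the seat to the right).
            if col + 1 < COLS {
                if let Some(right) = seating[row][col + 1] {
                    if right.sex != s.sex {
                        total += 8;
                    }
                }
            }
        }
    }
    total
}

fn main() {
    let mut seating: [[Option<Student>; COLS]; ROWS] = [[None; COLS]; ROWS];
    seating[0][0] = Some(Student { height_cm: 160, sex: Sex::Female });
    seating[0][1] = Some(Student { height_cm: 180, sex: Sex::Male });
    // Both sit in the front row (+4 each) and beside someone of the
    // opposite sex (+8 for the pair): 4 + 4 + 8 = 16.
    println!("{}", score(&seating));
}

The remaining rules would slot into the same per-seat loop, and a Tabu Search or Simulated Annealing implementation would call this after every candidate move (or, better, compute the score delta of a move incrementally).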
Without constraint 6, I think something as simple as a First Fit Decreasing algorithm (a sketch follows below) can be designed to be optimal:
All even seats go to females and all odd seats go to males (if there aren't enough seats for one gender, put the overflow in the other gender's seats). Schedule both genders independently; while scheduling one gender, ignore the other gender's seats.
Sort all students of one gender group by height. Assign them one by one in decreasing height.
At each step, assign the tallest unassigned student to the unassigned seat in the highest row (and, secondarily, the one furthest to the left).
Constraint 2 might still not be optimal this way, though; you might still need to apply some Tabu Search or Late Acceptance on top of it.
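A rough sketch of that greedy construction, reusing the same kind of hypothetical Student type and 5x10 room as above (gender overflow and the individual-preference bonus are ignored, and "highest row" is taken to mean the row farthest from the whiteboard):

// Sketch of the First Fit Decreasing construction described above, with
// hypothetical types and a hard-coded 5x10 room. Row 0 is the front row.
// With a 10-seat-wide room, even-numbered seats coincide with even columns.

#[derive(Clone, Copy, PartialEq)]
enum Sex { Female, Male }

#[derive(Clone, Copy)]
struct Student { height_cm: u32, sex: Sex }

const ROWS: usize = 5;
const COLS: usize = 10;

fn first_fit_decreasing(students: &[Student]) -> [[Option<Student>; COLS]; ROWS] {
    let mut seating = [[None; COLS]; ROWS];

    // Split by gender and sort each group by decreasing height.
    let mut females: Vec<Student> =
        students.iter().copied().filter(|s| s.sex == Sex::Female).collect();
    let mut males: Vec<Student> =
        students.iter().copied().filter(|s| s.sex == Sex::Male).collect();
    females.sort_by(|a, b| b.height_cm.cmp(&a.height_cm));
    males.sort_by(|a, b| b.height_cm.cmp(&a.height_cm));

    // Females take even columns, males take odd columns. Seats are filled
    // back row first, left to right, so the tallest students end up
    // farthest from the whiteboard.
    for (group, parity) in [(females, 0), (males, 1)] {
        let mut next = group.into_iter();
        for row in (0..ROWS).rev() {
            for col in (0..COLS).filter(|&c| c % 2 == parity) {
                seating[row][col] = next.next();
            }
        }
    }
    seating
}

fn main() {
    // 40 hypothetical students: alternating sex, heights 150..190 cm.
    let students: Vec<Student> = (0..40u32)
        .map(|i| Student {
            height_cm: 150 + i,
            sex: if i % 2 == 0 { Sex::Female } else { Sex::Male },
        })
        .collect();

    // Print the resulting heights row by row (0 marks an empty seat),
    // front row first.
    for row in first_fit_decreasing(&students) {
        let heights: Vec<u32> = row.iter().map(|s| s.map_or(0, |st| st.height_cm)).collect();
        println!("{:?}", heights);
    }
}

Because seats are filled from the back row forward in decreasing height, each column's heights ascend away from the whiteboard, which is what constraints 2 and 5 reward. Note that with a 10-seat-wide room the even/odd seat split makes every column single-gender, so constraint 4's penalty is incurred everywhere; that trade-off is inherent in the construction as described.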

Related

rust custom debug output for external object?

I am debugging some code that depends on nalgebra. 3D vectors in nalgebra are 3x1 matrices, so when they are printed I get this:
[
[
-1.5769595,
76.86241,
40.108894,
],
],
[
[
-1.4947789,
76.70875,
39.909042,
],
],
[
[
-1.5849781,
76.93078,
40.18302,
],
],
I am doing 3D modelling so 99.999999% of all my use cases are going to treat vectors and covectors as just vectors.
I'd love to change this printing to just be:
[x, y, z],
which would be much more compact and far better for my use case. Is this doable?
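One common workaround is to wrap a borrowed vector in your own newtype and implement Debug (or Display) on the wrapper, since the Debug impl that nalgebra provides for Vector3 cannot be overridden from outside the crate. A minimal sketch (Vec3Dbg is a made-up name; the .x/.y/.z field access assumes nalgebra's small-vector accessors):

use std::fmt;
use nalgebra::Vector3;

// Hypothetical newtype: wraps a borrowed Vector3 so we can give it our
// own Debug formatting without touching the external type.
struct Vec3Dbg<'a>(&'a Vector3<f32>);

impl fmt::Debug for Vec3Dbg<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[{}, {}, {}]", self.0.x, self.0.y, self.0.z)
    }
}

fn main() {
    let v = Vector3::new(-1.5769595_f32, 76.86241, 40.108894);
    println!("{:?}", Vec3Dbg(&v)); // prints: [-1.5769595, 76.86241, 40.108894]
}

To print a whole collection, map each vector through the wrapper before formatting, or implement Debug by hand on whatever struct owns the vectors.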

NetLogo: Assign Turtles Randomly but Equally to Different Groups

I used the code below to create 50 turtles and randomly assign them to one of four different strategies (i.e. a, b, c and d):
turtles-own [ my_strategy ]
to setup
  ;; create 50 turtles and assign them randomly
  ;; to one of four different strategies
  create-turtles 50 [
    set my_strategy one-of [ "a" "b" "c" "d" ]
  ]
end
The problem is that when I decrease the number of created turtles or when I increase the number of strategies, I face a situation where at least one of the strategies is not taken by any turtle.
I need your help here to:
1. Make sure that I do not face a situation where one or more strategies are not taken by any turtle.
2. Make sure that the number of turtles assigned to each strategy is roughly equal.
I tried to solve the problem by using the code below, but it did not work:
turtles-own [ my_strategy ]
to setup
  let strategies [ "a" "b" "c" "d" ]
  let turtles-num 51
  let i 0
  create-turtles turtles-num
  while [ not any? turtles with [ my_strategy = 0 ] ] [
    ifelse i < length strategies - 1 [ set i i + 1 ] [ set i 0 ]
    ask n-of ceiling ( turtles-num / length strategies ) turtles with [ my_strategy = 0 ] [
      set my_strategy item i strategies
    ]
  ]
end
Thank you for your help.
In general, you should never use who numbers for anything in NetLogo. However, this is one of the very few occasions where it's appropriate.
From comments, you actually want equal (or as close to equal as possible) numbers in each group so you don't need to calculate the number in each group. When turtles are created, they are created with sequential who numbers. So you can use the mod operator to assign them to each strategy in turn.
turtles-own [ my_strategy ]
to setup
  ;; create 50 turtles and assign them equally
  ;; to one of four different strategies
  create-turtles 50 [
    set my_strategy item (who mod 4) [ "a" "b" "c" "d" ]
  ]
end

NetLogo Efficient way to create fixed number of links

I have about 5000 agents (people) in my model. I want to give them an arbitrary number of friends and have reciprocal but random pairing. So if person A chooses person B then person B also chooses person A. My code works fine, but is fairly slow. I will likely want to increase both the number of friends and the number of people in the future. Any quicker suggestions?
ask people
[ let new-links friends - count my-links
  if new-links > 0
  [ let candidates other people with [ count my-links < friends ]
    create-links-with n-of min (list new-links count candidates) candidates
    [ hide-link ]
  ]
]
Note that friends is a global variable in the above code, but my eventual code will probably generalise to have wanted-number-of-friends as an attribute of people.
EDITED: Added the if new-links > 0 condition so that the nested ask is avoided when no candidates need to be found. This improved speed but is still not really scalable.
Great question. This is actually quite challenging to optimize. The problematic line is:
let candidates other people with [ count my-links < friends ]
This is slow because it has every agent checking with every other agent. With 5000 agents, that's 25,000,000 checks! Unfortunately, there isn't really a good way to optimize this particular line without some fancy data structures.
Fortunately, there is a solution that generalizes really well to generating any degree distribution in the network (which sounds like what you ultimately want). Unfortunately, the solution doesn't translate super well to NetLogo. Here it is, though:
let pairs [] ;; pairs will hold the pairs of turtles to be linked
while [ pairs = [] ] [ ;; we might mess up creating these pairs (by making self loops), so we might need to try a couple of times
  let half-pairs reduce sentence [ n-values friends [ self ] ] of turtles ;; create a big list where each turtle appears once for each friend it wants to have
  set pairs (map list half-pairs shuffle half-pairs) ;; pair off the items of half-pairs with a randomized version of half-pairs, so we end up with a list like: [[ turtle 0 turtle 5 ] [ turtle 0 turtle 376 ] ... [ turtle 1 turtle 18 ]]
  ;; make sure that no turtle is paired with itself
  if not empty? filter [ first ? = last ? ] pairs [
    set pairs []
  ]
]
;; now that we have pairs that we know work, create the links
foreach pairs [
  ask first ? [
    create-link-with last ?
  ]
]
It doesn't matter if friends here is a global or a turtle variable. The amount of time this takes depends on the number of times that it needs to try making pairs, which is random. Experimenting, I found that it was usually about 3 seconds with 5000 agents, each with degree 5. This is compared to about 60 seconds on my machine with your original way of doing this (which, for what it's worth, is the way I would recommend when using fewer agents).
After debugging (see NetLogo Efficiently create network with arbitrary degree distribution), the following version is relatively efficient. It builds an agentset (called lonely below) of the turtles that still need links and removes turtles from it as they reach their quota. Removing individual turtles from the agentset is more efficient than rebuilding the candidate set with a nested query each time.
The variable nFriends is a global (with a slider in the original model) that is the target number of links, identical for all agents.
let lonely turtles with [ count my-links < nFriends ]
ask turtles
[ set lonely other lonely
  let new-links nFriends - count my-links
  if new-links > 0
  [ let chosen n-of min (list new-links count lonely) lonely
    create-links-with chosen [ hide-link ]
    ask chosen [ if count my-links = nFriends [ set lonely other lonely ] ]
  ]
]

Sort a patch set or agentset with two conditions in NetLogo

My patches have cost and gain attributes, and I would like to sort a list of patches by minimum cost and maximum gain. The sort-by function works for sorting on one attribute, but how can I sort on both attributes?
To sort an agentset on multiple attributes, you can use either sort-by or sort-on:
patches-own [ cost gain ]
to sort-patches
  ca
  ask patches [
    set cost random 100
    set gain random 100
  ]
  let patches-sorted-by sort-by [
    ([ cost ] of ?1 < [ cost ] of ?2) or
    ([ cost ] of ?1 = [ cost ] of ?2 and [ gain ] of ?1 > [ gain ] of ?2)
  ] patches
  show map [ [ list cost gain ] of ? ] patches-sorted-by
  let patches-sorted-on sort-on [ (cost * 1000) - gain ] patches
  show map [ [ list cost gain ] of ? ] patches-sorted-on
end
Which one you prefer is up to you. Using sort-on requires you to construct your formula carefully (e.g., the above would not work if gains could be greater than 1000), but it is slightly less verbose.
Edit: a more general way of sorting on multiple criteria
OK, this is probably overkill for your situation, but I came up with something a lot more general:
to-report sort-by-criteria [ criteria items ]
  ; `criteria` needs to be a task that returns a list of numbers
  report sort-by [
    compare-lists (runresult criteria ?1) (runresult criteria ?2)
  ] items
end
to-report compare-lists [ l1 l2 ]
  report ifelse-value (empty? l1 or empty? l2) [ false ] [
    ifelse-value (first l1 = first l2)
      [ compare-lists but-first l1 but-first l2 ]
      [ first l1 < first l2 ]
  ]
end
What you need to pass sort-by-criteria is a task that, given one of the items that you want to sort, will report a list of numbers according to which your items will be sorted.
In your case, you would use it like:
let sorted-patches sort-by-criteria (
  task [ [ list cost (-1 * gain) ] of ? ]
) patches
For two criteria, it's probably not worth using, but if you had a long list of criteria, it would probably be a lot easier and clearer to use than any of the other methods.

Is there a name for this type of algorithm?

I have a 2 dimensional array forming a table:
[color][number][shape ]
-------------------------
[black][10 ][square ]
[black][10 ][circle ]
[red ][05 ][triangle]
[red ][04 ][triangle]
[green][11 ][oval ]
and what I want to do is group largest common denominators, such that we get:
3 groups
group #1: color=black, number=10, shapes = [square, circle]
group #2: color=red, shape=triangle, numbers = [05,04]
group #3: color=green, number=11, shape = oval
I wrote code that handles a 2-"column" scenario, then I needed to adjust it for 3, and I figured I might as well do it for n. I wanted to check first whether there is some literature around this, but I can't think of what to start looking for!
Data clustering algorithms are the closest thing I could find. Your space is 3-dimensional, where each point is identified by a 3-tuple (color, number, shape).
http://home.dei.polimi.it/matteucc/Clustering/tutorial_html/
http://en.wikipedia.org/wiki/Cluster_analysis
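For the specific output shown in the question, though, full cluster analysis may be more machinery than needed: rows that agree on every attribute except one can be grouped with a plain hash map keyed on the remaining attributes. A rough sketch in Rust (hypothetical row representation; rows matching no group simply stay in their own singleton group, and a row that could join several groups is not resolved here):

use std::collections::HashMap;

// Each row is a fixed-size list of attribute values (color, number, shape).
type Row = Vec<String>;

// Group rows that match on all attributes except the one at `skip`:
// the key is the row with that attribute removed, and the value collects
// the differing attribute values.
fn group_on_all_but(rows: &[Row], skip: usize) -> HashMap<Vec<String>, Vec<String>> {
    let mut groups: HashMap<Vec<String>, Vec<String>> = HashMap::new();
    for row in rows {
        let mut key = row.clone();
        let varying = key.remove(skip);
        groups.entry(key).or_default().push(varying);
    }
    groups
}

fn main() {
    let rows: Vec<Row> = vec![
        vec!["black".into(), "10".into(), "square".into()],
        vec!["black".into(), "10".into(), "circle".into()],
        vec!["red".into(), "05".into(), "triangle".into()],
        vec!["red".into(), "04".into(), "triangle".into()],
        vec!["green".into(), "11".into(), "oval".into()],
    ];

    // Let each column vary in turn and keep groups with 2+ members.
    for skip in 0..3 {
        for (key, values) in group_on_all_but(&rows, skip) {
            if values.len() > 1 {
                println!("column {} varies: {:?} -> {:?}", skip, key, values);
            }
        }
    }
}

Running it on the table above reports the black/10 group (shapes square and circle) and the red/triangle group (numbers 05 and 04); the green row remains on its own.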
