During my model setup on innovation diffusion, another little programming issue occurred to me in NetLogo. I would like to model that people are more likely to learn from people who are similar to them. Therefore, the model considers an ability value that is allocated to each agent:
[set ability random 20 ]
In the go procedure I then want them to compare their own ability value with the values of their link-neighbors.
So, for example: ability of turtle 1 = 5, ability of neighbor 1 = 10, ability of neighbor 2 = 4. Hence the (absolute) differences are [5, 1], and the turtle will learn more from neighbor 2 than from neighbor 1.
But I don't know how to approach the problem of asking each individual neighbor for the difference. As a first idea, I thought of doing it via a list variable like [difference1, ..., difference(n)].
So far I have only got an aggregated approach using average values, but this is not really consistent with recent social learning theory and might obscure situations in which the agent has many dissimilar neighbors but one who is quite similar to him:
ask turtles
[
  set ability random 20
  set ability-of-neighbor mean [ability] of link-neighbors
  set neighbor-coefficient abs (ability - ability-of-neighbor)
  ;; the smaller the coefficient, the more similar the neighbors
  ;; are and the more the turtle learns from his neighbor(s)
]
Thank you again for your help and advice, and I really appreciate any comments.
Kind regards,
Moritz
I am having a bit of a hard time getting my head around what you want, but here is a method of ranking link-neighbors.
let link-neighbor-rank sort-on [abs (ability - [ability] of myself)] link-neighbors
It produces a list of link-neighbors in ascending order of difference in ability.
If you only want the closest neighbor, use
let best min-one-of link-neighbors [abs (ability - [ability] of myself)]
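And if you literally want the list of differences (your [difference1, ..., difference(n)] idea) rather than the neighbors themselves, something along these lines should work inside an ask turtles block (an untested sketch):
let differences sort [abs (ability - [ability] of myself)] of link-neighbors
Here, of applied to the agentset reports the values as a list (in random order), and sort puts them in ascending order.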
I hope this helps.
I am looking for any direction on how to implement the process below; you should not need to understand much at all about poker.
Below is a grid of possible two-card combinations.
Pocket pairs in blue, suited cards in yellow and off-suited in red.
Essentially there is a slider under the matrix which selects a percentage of possible combinations of two cards which a player could be dealt. However, you can see that it moves in a sort of linear fashion, towards the "better" cards.
These selections can also be parsed from strings, e.g. AA-88,AKo-AJo,KQo,AKs-AJs,KQs,QJs,JTs is 8.6% of the matrix.
I've looked around but cannot find questions about this specific selection process. I am not looking for "how to create this grid", but rather for how I would go about the selection process based on the sliding percentage. I am primarily a JavaScript developer, but snippets in any language are appreciated, if applicable.
My initial assumption is that there is some sort of weighting involved, i.e. favoured towards pairs over suited and suited over off-suit, or could it just be predetermined and I'm overthinking this?
In my opinion there should be something along the lines of "grouping(s)" AND "a subsequent weighting" process. It should also be customisable for the user to provide an optimal experience (imo).
For example, if you look at the below:
https://en.wikipedia.org/wiki/Texas_hold_%27em_starting_hands#Sklansky_hand_groups
These are/were standard hand rankings created back in the 1970s/1980s; however, hand selection has become much more complicated since then. These kinds of groupings have changed a lot in 30 years, so poker players will want a custom user experience here.
But let's take a basic preflop scenario.
Combinations: pairs = 6, suited = 4, off-suit = 12
1 (AA:6, KK:6, QQ:6, JJ:6, AKs:4) = 28 combos
2 (AQs:4, TT:6, AK:16, AJs:4, KQs:4, 99:6) = 40
3 (ATs:4, AQ:16, KJs:4, 88:6, KTs:4, QJs:4) = 38
....
9 (87s:4, QT:12, Q8s:4, 44:6, A9:16, J8s:4, 76s:4, JT:16) = 66
Say, for instance, we only reraise the top 28 of the 1326 combinations (in theory there should be some deduction here, but for simplicity let's ignore that). We are then 3betting or reraising a very obvious and small percentage of hands; our holdings are obvious at around 2-4% of total hands. So a player may want to disguise their reraise or 3bet range with, say, 50% of the weakest hands from group 9. As a basic example.
Different decision trees and game theory can be used with "range building", so a simple ordered list may not be suitable for what you're trying to achieve; it depends on your program's purpose.
That said, if you're just looking to build an ordered list, then you could take the X% of hands that players open with (say the average is 27%) and run a hand equity calculator simulation, tweaking the GitHub project below to get different hand rankings. https://github.com/andrewprock/pokerstove
There are also some lists at the bottom of this page.
http://www.propokertools.com/help/simulator_docs
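As a rough sketch of the slider logic itself (in Python, since any language was fine): once you have an ordered list of hands with their combo counts, just walk down it and take hands until their combined combos reach the selected percentage of all 1326. The ranking below is a placeholder; plug in whatever ordering your equity simulation produces.
TOTAL_COMBOS = 1326  # 52 choose 2 possible two-card combinations

# placeholder ranking: (hand, number of combos), best first
ranked_hands = [("AA", 6), ("KK", 6), ("QQ", 6), ("AKs", 4), ("JJ", 6), ("AKo", 12)]

def select_range(ranked_hands, percent):
    # take hands from the top until the combo budget for `percent` is used up
    budget = TOTAL_COMBOS * percent / 100.0
    selected, used = [], 0
    for hand, combos in ranked_hands:
        if used + combos > budget:
            break
        selected.append(hand)
        used += combos
    return selected, 100.0 * used / TOTAL_COMBOS

print(select_range(ranked_hands, 2.0))  # -> (['AA', 'KK', 'QQ', 'AKs'], ~1.66)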
Be lucky!
At the moment I am working on an agent-based model about successful innovation diffusion in social networks. So far I am a newbie in agent-based modeling and programming.
The main idea is to model social learning among farmers, hence an agent's decision to adopt an innovation mainly depends on his personal network, meaning that if he is well connected and his neighbours are using the innovation successfully, he is more likely to adopt than if he is located remotely in the network.
Besides the network-related arguments about social learning, I would like to implement a time dimension: for example, the longer the neighbors of an agent have used the innovation successfully, the more likely the agent will adopt it as well. But this is exactly the point where I am stuck at the moment. My goal is to implement the following argument; the pseudocode looks like this so far.
1) a turtles-own tick counter
...
ask turtles
[
  ifelse adopted?
    [ set time-adopted time-adopted + 1 ]
    [ set time-adopted 0 ]
]
...
2) In a second procedure, each agent should check how long his neighbours have been using this innovation (in terms of "check time-adopted of neighbors").
ask turtles with [not adopted?]
[
  ask link-neighbors with [adopted?]
  [ ...*(here I don't know how to ask for the time-adopted value)* ]
  ;; the agent will then sum up all the "time-adopted" values he got from his neighbors
  set time-neighbors-adopted [sum of all "time-adopted" values of neighbors]
  ;; the agent will then feed these values into his personal utility
  ;; function, which determines whether he adopts the innovation or not
  set utility utility + 0.3 * time-neighbors-adopted
]
Many thanks for your help and advice.
Kind regards,
Moritz
To get the sum of the time for which the neighbors have been using the innovation, you only need one line, because NetLogo is amazing.
set time-neighbors-adopted sum [time-adopted] of link-neighbors with [adopted?]
like that
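Put together with your counter from step 1, the whole thing could look something like this (a sketch using the variable names from your question):
to go
  ask turtles with [adopted?]
    [ set time-adopted time-adopted + 1 ]
  ask turtles with [not adopted?]
  [
    set time-neighbors-adopted sum [time-adopted] of link-neighbors with [adopted?]
    set utility utility + 0.3 * time-neighbors-adopted
  ]
  tick
end
Note that sum over an empty list is 0, so turtles without adopter neighbors are handled automatically.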
I've been looking into the k nearest neighbors algorithm as I might be developing an application that matches fighters (boxers) in the near future.
The reason for my question, is to figure out which would be the best approach/algorithm to use when matching fighters based on multiple parameters and constraints depending on the rule-set.
The relevant properties of each fighter are the following:
Age (fighters will be assigned to an age group: 15, 17, 19, elite)
Weight
Number of fights
Now there are some rulesets for what can be allowed when matching fighters:
A maximum of 2 years between the fighters (unless it's elite)
A maximum of 3 kilos difference in weight
Now obviously the perfect match would be one where every attendee gets matched with another boxer who fits within the ruleset.
And the main priority is to match as many fighters with each other as possible.
Is K-nn the way to go or is there a better approach?
If so which?
This is too long for a comment.
For best results with K-nn, I would suggest principal components. These allow you to use many more dimensions and do a pretty good job of spreading the data through the space, to get a good neighborhood.
As for incorporating existing rules, you have two choices. Probably the best way is to build them into your distance function. Alternatively, you can take a large neighborhood and build them into the combination function.
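For what it's worth, the principal-components step is only a couple of lines with, e.g., scikit-learn (a sketch; the feature matrix and its columns are made up for illustration):
import numpy as np
from sklearn.decomposition import PCA

# hypothetical feature matrix: one row per fighter (age, weight, fights)
X = np.array([[19, 72.5, 10],
              [18, 70.0, 12],
              [25, 81.0, 30]])

# project onto the principal components before running k-nn
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)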
I would go with k-nearest-neighbor search. Since your dataset is in a low-dimensional space (i.e. 3 dimensions), I would use CGAL to perform the task.
Now, the only thing you have to do, is to create a distance function like this:
float boxers_dist(Boxer a, Boxer b) {
    // rule-set: more than 2 years or more than 3 kilos apart -> no valid match
    if (std::fabs(a.year - b.year) > 2 || std::fabs(a.weight - b.weight) > 3)
        return std::numeric_limits<float>::infinity(); // needs <limits>
    // one option: Euclidean distance over the three dimensions (needs <cmath>;
    // the 'fights' field name is assumed)
    return std::sqrt(std::pow(a.year - b.year, 2) +
                     std::pow(a.weight - b.weight, 2) +
                     std::pow(a.fights - b.fights, 2));
}
And you are done...now go fight!
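One more thought: k-nn gives each fighter a ranked list of candidates but does not pair people up by itself. Since your main priority is matching as many fighters as possible, a simple greedy pass over the same distance idea might be a starting point (a Python sketch; field names are assumed, and greedy pairing is not guaranteed to maximize the number of pairs, which is properly a maximum-matching problem):
import math

def boxers_dist(a, b):
    # a and b are dicts with 'age', 'weight' and 'fights' keys (names assumed)
    if abs(a["age"] - b["age"]) > 2 or abs(a["weight"] - b["weight"]) > 3:
        return math.inf
    return math.sqrt((a["age"] - b["age"]) ** 2
                     + (a["weight"] - b["weight"]) ** 2
                     + (a["fights"] - b["fights"]) ** 2)

def greedy_pairs(fighters):
    # sort all legal pairs by distance, then pair each fighter at most once
    candidates = sorted(
        (boxers_dist(a, b), i, j)
        for i, a in enumerate(fighters)
        for j, b in enumerate(fighters) if i < j)
    pairs, used = [], set()
    for d, i, j in candidates:
        if d < math.inf and i not in used and j not in used:
            pairs.append((i, j))
            used.update((i, j))
    return pairs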
I'll start out by framing the problem I'm trying to solve. This is a health care problem so I'll use the terms 'member' and 'provider.' Basically, we want to try to contract providers until a certain percentage of members are "covered."
With that, let me define "coverage": a member is covered if there is a contracted provider within a given number of miles (let's call this maxd for maximum distance). So if our maxd=15, and there's a provider 12 miles away from me, I'm covered by that provider. Each member only has to be covered by one provider.
The goal here is to cover a certain percentage of members (let's say 90%) while contracting the fewest providers. In this case, it's helpful to generate a list that, given our current state (our list of contracted providers), shows us which providers would cover the most members that aren't already covered.
Here's how I'm doing this so far. I have a set contracted_providers that tells me who I have contracted. It may be empty. First, I find out what members are already covered and forget about them, since members only need to be covered once.
maxd = 15  # maximum distance to be covered, 15 for example

# collect covered members first; removing from a set while iterating is a bug
covered = set()
for p in contracted_providers:
    for m in members:
        if dist(p, m) <= maxd:
            covered.add(m)
members -= covered
Then I calculate each provider's coverage (percentage-wise) of the remaining set of yet-uncovered members.
uncovered_members = members  # renaming this for clarity
results = dict()
for p in not_contracted_providers:
    count = 0
    for m in uncovered_members:  # this set now contains only uncovered members
        if dist(p, m) <= maxd:
            count += 1
    # percentage of uncovered members that this provider would cover
    results[p] = count / len(uncovered_members)
Ok, thanks for bearing with me through that. Now I can ask my question. These data sets are pretty big. On the larger end of the scale, we might have 10,000 providers and 40,000 members. Is there any better way to do this than brute-force?
I'm thinking something along the lines of a data structure that represents a heat map and then use that to find the best providers. Basically something that allows me to cheat a little bit and not have to calculate each individual distance for every provider, member combination. I've tried to research this but I don't even know what to search for, so any sort of direction would be helpful. If it's relevant, all locations are represented by geolocation (lat,long).
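To make that concrete, here is the kind of thing I am imagining (a rough Python sketch; the cell size math is approximate and the names are made up):
from collections import defaultdict

MILES_PER_DEG_LAT = 69.0  # rough; longitude degrees shrink away from the equator

def build_grid(members, maxd):
    # bucket members into lat/long cells roughly maxd miles wide
    cell_deg = maxd / MILES_PER_DEG_LAT
    grid = defaultdict(list)
    for m in members:
        grid[(int(m.lat // cell_deg), int(m.long // cell_deg))].append(m)
    return grid, cell_deg

def nearby_members(p, grid, cell_deg):
    # a provider only needs to check the 3x3 block of cells around its own
    ci, cj = int(p.lat // cell_deg), int(p.long // cell_deg)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            yield from grid.get((ci + di, cj + dj), [])
Then dist(p, m) would only run against the handful of members from nearby_members instead of all 40,000 — is that roughly the right direction?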
And as a side note, if brute force is pretty much the only option, would something like Hadoop be a good choice to do it quickly?
So in my current HubNet application turtles are organized in various graph-structures. Whether or not two clients can see each other depends on whether the corresponding turtles are connected in the graph.
I currently build the graphs based on the turtles' who numbers and have thus built in the assumption that, if there are n turtles at any given point, these are numbered from 0 to n-1. I expect that this might cause problems if, for instance, a client connects, then drops, and then re-connects, since (if I'm not mistaken) this will give that client a new who number (and the old number is not reused). So I'm wondering if there is a way to make sure that the turtles are numbered the way I want.
Dropping everyone and then resetting who numbers would be one (bad) solution. Can you help me, either by suggesting a better solution or by explaining how to implement the bad one?
If you want to use who numbers, you'll need to hide the turtles instead of killing them. If that makes things awkward because you find yourself needing to refer to e.g. turtles with [not hidden?], then consider making two breeds, call them actives and inactives or something like that, and then when hiding a turtle do hide-turtle set breed inactives. Then you can always refer to the set of active turtles just as actives. When someone joins the simulation, give them an inactive turtle if there is one, and have it do show-turtle set breed actives.
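For concreteness, a minimal sketch of that bookkeeping (breed names as suggested above):
breed [ actives active ]
breed [ inactives inactive ]

to deactivate  ;; turtle procedure: run when a client drops
  hide-turtle
  set breed inactives
end

to activate  ;; turtle procedure: run when a client (re)joins
  show-turtle
  set breed actives
end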
Or, if you decide not to use who numbers, you'll need a new turtle variable, say you call it id. When you make a new turtle, do set id count turtles - 1. When a turtle dies, you'll need to reassign new id numbers so there aren't gaps anymore. Does it matter exactly what scheme you use for that? Do you need there to be any particular relationship between a turtle's old number and its new number? I can think of several possible different approaches to this. Here's one that assigns the id numbers in ascending order by who number:
let whos sort [who] of turtles
ask turtles [ set id position who whos ]
P.S. But I have to wonder, is all this numbering really necessary? In a normal NetLogo model, it's almost never necessary to use who numbers for anything. There's almost always a simpler way. Why do you feel you need to use numbering in this model? Perhaps you do need it, but I'm at least a little skeptical.