I have just discovered NetLogo as a tool for modelling social processes, and I need help implementing a model I would like to test.
In my model, I have two types of agents, either angry or fearful.
Angry ones have a "risky" opinion between 6 and 10.
Fearful ones have a "moderate" opinion between 1 and 5.
They may change their opinion under the influence of their Moore neighbours, according to this rule:
to compute-influence
  ask turtles
  [ if any? turtles-on neighbors
    [ let influencedOpinion mean [ opinion ] of turtles-on neighbors
      set opinion precision influencedOpinion 1
      set steps steps + 1
      ifelse opinion < 6
        [ set opinion-label "moderate" ]   ;; storing the label in a separate turtles-own variable
        [ set opinion-label "risky" ]      ;; keeps opinion numeric for the next round
    ]
  ]
  recolor-turtles
end
What I would like to do is model different behaviour for angry and fearful agents, so that both follow the rule, but:
fearful agents update their opinion according to the rule with a probability of 65%
angry agents with a probability of 35%
Probably because of my inexperience, I did not find anything in the NetLogo manual about applying a probability to a rule... can anyone help?
thanks,
benedetta
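A common NetLogo idiom for applying a rule with a given probability is to gate it with random-float. A minimal sketch (the breed names fearfuls and angries are assumptions, not from the post):

```netlogo
ask turtles
[ ;; fearful agents update with probability 0.65, angry ones with 0.35
  let p ifelse-value (breed = fearfuls) [ 0.65 ] [ 0.35 ]
  if random-float 1 < p
  [ ;; apply the influence rule here
  ]
]
```

random-float 1 returns a number in [0, 1), so the bracketed block runs with exactly probability p.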
During my model set-up on innovation diffusion, another little programming issue occurred to me in NetLogo. I would like to model that people are more likely to learn from people who are like them. Therefore the model considers an ability value that is allocated to each agent:
[set ability random 20 ]
In the go procedure I then want them to compare their own ability value with the values from their linked neighbors.
So for example: ability of turtle1 = 5, ability of neighbor1 = 10, ability of neighbor2 = 4. Hence the (absolute) differences are [ 5, 1 ], so he will learn more from neighbor2 than from neighbor1.
But I don't know how to approach the problem of asking each single neighbor for the difference. As a first idea, I thought of doing it via a list-variable like [difference1, ..., difference(n)].
So far I only have an aggregated approach using average values, but this is not really consistent with recent social learning theory and might mask situations in which the agent has many different neighbors but one who is quite similar to him:
ask turtles
[
  set ability random 20
  if any? link-neighbors
  [ set ability-of-neighbor mean [ ability ] of link-neighbors
    set neighbor-coefficient abs (ability - ability-of-neighbor)
    ;; the smaller the coefficient, the more similar the neighbors are
    ;; and the more the turtle learns from his neighbor(s)
  ]
]
Thank you again for your help and advice, and I really appreciate any comments.
Kind regards,
Moritz
I am having a bit of a time getting my head around what you want but here is a method of ranking link-neighbors.
let link-neighbor-rank sort-on [abs (ability - [ability] of myself)] link-neighbors
it produces a list of link neighbors in ascending order of difference of ability.
if you only want the closest neighbor use
let best min-one-of link-neighbors [abs (ability - [ability] of myself)]
I hope this helps.
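Turning the ranking into an actual update could look like this; a sketch, where learning from the most similar neighbor with weight 1 / (1 + difference) is my assumption, not something from the question:

```netlogo
ask turtles with [ any? link-neighbors ]
[ ;; pick the link-neighbor whose ability is closest to mine
  let partner min-one-of link-neighbors [ abs (ability - [ability] of myself) ]
  let diff abs (ability - [ability] of partner)
  ;; the smaller the difference, the larger the step toward the partner's ability
  set ability ability + (1 / (1 + diff)) * ([ability] of partner - ability)
]
```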
At the moment I am working on an agent-based model of successful innovation diffusion in social networks. So far I am a newbie in agent-based modeling and programming.
The main idea is to model social learning among farmers, hence an agent's decision to adopt an innovation mainly depends on his personal network: if he is well connected and his neighbours are using the innovation successfully, he will be more likely to adopt than if he is located remotely in the network.
Besides the network-related arguments about social learning, I would like to implement a time dimension, for example: the longer the neighbors of an agent use the innovation successfully, the more likely the agent will adopt the innovation as well. But this is exactly the point where I am stuck right now. The pseudo code looks like the following so far.
1) a turtles-own tick counter
...
ask turtles
[
  ifelse adopted?
    [ set time-adopted time-adopted + 1 ]
    [ set time-adopted 0 ]
]
...
2) In a second procedure each agent should check how long his neighbours have been using this innovation (in terms of "check time-adopted of neighbors").
ask turtles with [not adopted?]
[
  ask link-neighbors with [adopted?]
  [ ... ;; *(Here I don't know how to ask for the time-adopted value)*
  ]
  ;; the agent will then sum up all the "time-adopted" values he got from his neighbors
  set time-neighbors-adopted [sum of all "time-adopted" values of neighbors]
  ;; The agent will then feed these values into his personal utility
  ;; function, which determines whether he adopts the innovation or not
  set utility utility + 0.3 * time-neighbors-adopted
]
Many thanks for your help and advice.
Kind regards,
Moritz
To get the sum of the time the neighbors have been using the innovation you only need one line, because NetLogo is amazing.
set time-neighbors-adopted sum [time-adopted] of link-neighbors with [adopted?]
like that
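Putting the tick counter and that one-liner together, the whole update could be sketched as follows (the variable names and the 0.3 weight come from the question; the procedure name is hypothetical):

```netlogo
to update-adoption   ;; hypothetical procedure name
  ask turtles
  [ ifelse adopted?
      [ set time-adopted time-adopted + 1 ]
      [ set time-adopted 0 ]
  ]
  ask turtles with [ not adopted? ]
  [ set time-neighbors-adopted sum [time-adopted] of link-neighbors with [adopted?]
    set utility utility + 0.3 * time-neighbors-adopted
  ]
end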
I have a programming search problem, and I am wondering if there is an algorithm, class, formula or procedure that can produce good search locations based on past results. (I'm guessing there is, somewhere.) Or would the solution I threw out there be good?
Let me try to explain with a simple example: say there is a pond that is 2 × 2 meters and 3 meters deep. I can put my fishing lure at roughly any whole-meter x,y,z position (a 3 × 3 × 3 grid, so 27 locations). Say I fish at each location for one hour (testing out the pond) and I catch a different number of fish at each of the 27 locations. Now, after I do that, the best place to fish is logically the location where I caught the most fish, BUT just because I caught the most fish there does not mean it's the best spot. I could have just been lucky. It would probably be better to spend a big chunk of my time at that location but still venture out a percentage of the time to confirm that it is the best place.
One simple (and bad?) solution is just to fish 10 hours in every location; wherever the most fish are caught would probably be a good location, but that would be a lot of wasted time (270 hours). Chances are, if I catch 15 fish at some x,y,z and none at x2,y2,z2, then I should not spend much time at x2,y2,z2.
A second solution I was thinking about was to keep a tally of the hours spent and total fish caught at each location. And then do something like: (simple example)
float catchesByLocation[3,3,3] = {1}; //init all to 1
float totalTimeSpentByLocation[3,3,3] = {1}; //init all to 1
While(true) //never really ends
{
Do x = 0 to 2
Do y = 0 to 2
Do z = 0 to 2 //depth
{
float timeToSpendAtThisLoc = catchesByLocation[x,y,z] / totalTimeSpentByLocation[x,y,z];
float catches = GoFishing(x, y, z, timeToSpendAtThisLoc); //fish here for that long
catchesByLocation[x,y,z] = catchesByLocation[x,y,z] + catches;
totalTimeSpentByLocation[x,y,z] = totalTimeSpentByLocation[x,y,z] + timeToSpendAtThisLoc;
}
}
With this solution, some amount of time will always be spent on the bad locations, but as time goes on the bad locations will get a very small fraction of the total time.
So the question I have - is there some logical approach to do this? Maybe there is even an exact correct way to solve this using math? Any thoughts on ways to attack this problem? Sorry for the bad title, I cannot think of how to title it and am open to suggestions. Thank you for reading my question.
Your fish pond problem is an instance of a class of problems called Explore/Exploit algorithms, or Multi-Armed Bandit problems; see e.g. http://en.wikipedia.org/wiki/Multi-armed_bandit. There is a large body of mathematical theory and algorithmic approaches; the key assumptions are roughly as follows:
Fishing at the location where we have seen the highest number of fish per hour optimizes the expected short-term reward (this is what we should do if we had only one hour). However, if we continue fishing over some time, there might be better spots, but we have insufficient information.
To formalize this thought, we introduce a time discount (a fish caught today is more valuable than one caught tomorrow, say, by a factor of 0.8). Our goal is to maximize the total discounted fish, over a set period of fishing or over an infinite horizon.
Every hour, we decide whether to fish in the current best location, or to obtain more information on a new one. The simplest possible strategy ("epsilon-greedy") would fish at the currently best-looking location e.g. with probability 90%, and select another location randomly 10% of the time.
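The epsilon-greedy strategy just described can be sketched in a few lines of Python (the go_fishing reward function, the optimistic initialization to 1, and the one-hour time step are illustrative assumptions):

```python
import random

def epsilon_greedy(locations, go_fishing, hours, epsilon=0.1):
    """Fish at the best-looking location most of the time; explore otherwise."""
    catches = {loc: 1.0 for loc in locations}     # optimistic init, as in the question
    time_spent = {loc: 1.0 for loc in locations}  # hours spent per location
    for _ in range(hours):
        if random.random() < epsilon:
            loc = random.choice(locations)        # explore: random location
        else:                                     # exploit: best fish-per-hour so far
            loc = max(locations, key=lambda l: catches[l] / time_spent[l])
        catches[loc] += go_fishing(loc)           # fish there for one hour
        time_spent[loc] += 1.0
    return max(locations, key=lambda l: catches[l] / time_spent[l])
```

With enough hours, the returned location converges to the truly best spot while only a fraction epsilon of the time is spent confirming that the others are worse.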
More sophisticated strategies would introduce a probability estimate that a location can be better than our current best location (this depends both on the estimate's expected value and its variance; i.e., total time spent, and fish/hour). Then, we base the decision on this probability, for a more informed choice (explore spots first that look most promising so far).
For the fish pond problem, a reasonable probability model might take neighborhood into account (location (x,y,z) is likely similar to locations (x-1,y,z), (x,y-1,z), etc.).
I have to run the same code for different village map settings. Right now my grid is 20 × 20 patches, but I will have up to a 60 × 60 grid as well. All patches currently have two variables, storage and food-level, but only 10 patches will use their food-level variable. I can continue with the same settings, or I can create 10 turtles (trees, for example), assign them food-level, and remove food-level from the patches. Which way do you think is better?
Neither approach seems obviously superior to me, given only the information that you've stated. The patches-only approach seems a little simpler, so I guess I'd stick with that for now, but keep the idea of switching in the back of your mind, in case you discover later, once your model is more elaborate, that there would be benefits to switching to turtles that just aren't apparent yet.
Note that if you need to do patches with [food-level > 0] a lot, it will take time each tick to scan all of the patches to find the ones with food. If that turns out to be a performance issue in your model, using turtles instead would solve it. But if the ten patches with food on them are always the same, you could run patches with [food-level > 0] once during setup and store the resulting patchset in a global variable, and that would also solve the performance issue.
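The caching idea could be sketched like this (the global name food-patches and the procedure names are assumptions for illustration):

```netlogo
globals [ food-patches ]

to setup-food   ;; hypothetical: run once during setup
  set food-patches patches with [ food-level > 0 ]  ;; scan the grid only once
end

to consume-food  ;; hypothetical: run each tick
  ask food-patches [ set food-level food-level - 1 ]  ;; no full-grid scan here
end
```

This only works if the set of food patches never changes; if food can appear on new patches, the stored patchset would go stale and you would need to rebuild it.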