Maple display settings for matrix output

I am working in Maple and my output consists of matrices, which were displayed in the worksheet.
A coworker looking at my code changed a setting for the output of matrices (in a Format or View drop-down menu) that changed the format of the output matrix. He said it would run faster. The matrix is still visible, but in a collapsed way.
I cannot remember which setting he changed. Does anyone know the setting I am talking about?

There is an interface setting, rtablesize, which controls how small a Matrix needs to be in order to display in full. If either dimension is greater than the value of that setting then a placeholder is displayed instead.
For example,
restart;
interface(rtablesize); # default is 10
10
m:=LinearAlgebra:-RandomMatrix(6,3);
     [ 57  -32   99]
     [ 27  -74   29]
     [-93   -4   44]
m := [-76   27   92]
     [-72    8  -31]
     [ -2   69   67]
interface(rtablesize=5):
m;
[ 6 x 3 Matrix ]
[ Data Type: anything ]
[ Storage: rectangular ]
[ Order: Fortran_order ]
I am not aware of any way to change this setting from the menubar, with just the mouse.
There is an option in the menubar (eg. On MS-Windows, from the menu item Tools->Options) which controls the elision of long output (ie. long results get displayed with triple-dot ellipsis). But that is likely not what you're after.
Note that making such changes can affect how quickly the interface (eg. the "GUI" or Graphical User Interface) renders large results. But it doesn't affect computation speed.
If you don't want large results displayed then the easiest thing to do is to terminate your Maple statements (commands) with a full colon (instead of terminating with a semicolon, or with nothing).
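For example, a brief sketch continuing the session above, showing both ways to control this: suppressing output entirely with a colon terminator, or raising rtablesize so that large rtables display in full.

```maple
m := LinearAlgebra:-RandomMatrix(6,3):  # colon terminator: computed, nothing displayed
interface(rtablesize=infinity):         # display rtables in full, at any size
m;                                      # now prints all 18 entries
interface(rtablesize=10):               # restore the default
```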

Related

How to distribute agents' attributes randomly according to specific probabilities in Netlogo?

I am relatively new to Netlogo, having completed only a handful of models. Currently working on one for my dissertation where I need to distribute agents' attributes randomly according to specific probabilities, some at the onset of the simulation, other attributes to be distributed throughout. This is connected with an extension of the trust game, for those familiar with it. I completed the conceptual model with a colleague who doesn't use Netlogo, so I am a bit stuck at the moment.
I think the rnd extension might be useful, but I can't quite figure out how to use it. My apologies if this seems redundant to any of you, but I really hope to get some help here.
extensions [ rnd ]

;; divides agents into two types
breed [ sexworkers sexworker ]
breed [ officers officer ]

;; determines attributes of agents
sexworkers-own
[ assault?     ;; is assaulted
  trust?       ;; probability to trust police to report assault
  protection?  ;; probability of good experience with police during report
  prob-trust ] ;; probability to trust overall

officers-own
[ behavior ]   ;; probability of treating sex workers well/badly during report
This is the start of the model, and then I want to distribute the attributes according to specific probabilities. I honestly haven't found a way to do this that works as I intend it to.
What I want is to start off with, for every sex worker alike, a probability of 0.01 of being assaulted (prob-assault; assault? = true). Afterwards, with each tick, there is again a 0.01 chance for sex workers to be assaulted.
Then, in the subset with assault? = true, there is a probability of reporting the assault (prob-report = 0.5), expressed by trust? = true/false. Within the subset of those who report, there is a final probability of having a good/bad experience with police (prob-protection), here protection? = true/false.
These three attributes should be randomly distributed according to the probabilities, and should then also result in a combined probability to trust police in the future, prob-trust (prob-trust = prob-assault + prob-report + prob-protection).
What I have done (without the rnd extension) so far is this:
;; determines sex workers' behavior
ask sexworkers [ move ]
ask sexworkers [ victimize ]
ask sexworkers [ file ]

to victimize
  ask sexworkers [
    ifelse random-float 1 <= 0.0001
      [ set assault? 1 ]
      [ set assault? 0 ]
  ]
end

to file
  ask sexworkers with [ assault? = 1 ] [
    ifelse random-float 1 <= 0.5
      [ cooperate ]
      [ avoid ]
  ]
end

to cooperate
  ask sexworkers [ set trust? 1 ]
end

to avoid
  ask sexworkers [ set trust? 0 ]
end
What happens at the moment, though, is that there is no variation in attributes: all sex workers seem to have no assault, and trust/not-trust varies for all of them simultaneously. I am not sure what is going on.
(1) You don't need the rnd extension for anything you are trying to do here. If you just want to take some action with some probability, then your approach of if random-float 1 < <probability value> is the correct approach. The rnd extension is for when you want to get into weighted probability, for example choosing agents based on their income.
(2) NetLogo recognises true and false (capitalisation does not matter) as specific truth values. You should not use 1 and 0 as proxies for true and false. There are several advantages to using the truth values directly. The most obvious is readability, you can have statements like set trust? true and if trust? = true [do something]. More compactly, you can simply say if trust? [do something]. Other advantages include access to logical operators such as not and and for your conditions.
With regard to your actual problem of every agent having the same behaviour, you have nested your ask turtles type statements. For example, you have:
to file
  ask sexworkers with [ assault? = 1 ] [
    ifelse random-float 1 <= 0.5
      [ cooperate ]
      [ avoid ]
  ]
end
If you substitute the cooperate and avoid procedures into this code, you would get:
to file
  ask sexworkers with [ assault? = 1 ] [
    ifelse random-float 1 <= 0.5
      [ ask sexworkers [ set trust? 1 ] ]
      [ ask sexworkers [ set trust? 0 ] ]
  ]
end
So, if your random number is, say, 0.4, then ALL your sexworkers will have trust? set to 1, not just the particular sexworker who 'rolled the die'.
You either need:
to file
  ask sexworkers with [ assault? = 1 ] [
    ifelse random-float 1 <= 0.5
      [ set trust? true ]
      [ set trust? false ]
  ]
end
Or you need:
to cooperate
  set trust? true
end

to avoid
  set trust? false
end
Use the first option if there's not really anything else that is being done. Use the second option if setting the trust? value is just one of many actions that the turtle should take when it is cooperating or avoiding.
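Putting the two points together, a minimal sketch of the question's procedures using boolean attributes (the 0.01 assault probability is taken from the question's description rather than the 0.0001 in the posted code):

```netlogo
to victimize
  ask sexworkers [
    set assault? (random-float 1 < 0.01)  ;; each sex worker rolls her own die
  ]
end

to file
  ask sexworkers with [ assault? ] [     ;; only assaulted workers decide whether to report
    ifelse random-float 1 < 0.5
      [ set trust? true ]
      [ set trust? false ]
  ]
end
```

Because the body of each ask runs once per agent, every sex worker gets her own independent draw.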

code with to-word and to-path in Red language

I am trying to create 2 panels through a single function using compose:
make-panel: func [sentchar][
    probe compose/deep [
        text "N1:"
        (to-set-word rejoin ["fld1" sentchar ":"]) field    ; TO BE NAMED fld1A and fld1B for 2 panels
        text "N2: "
        (to-set-word rejoin ["fld2" sentchar ":"]) field    ; TO BE NAMED fld2A and fld2B for 2 panels
        text "Product: "
        (to-set-word rejoin ["txt_out" sentchar ":"]) text  ; TO BE NAMED txt_outA and txt_outB for 2 panels
        button "Get product" [
            x: to-path to-word (rejoin ["face/parent/pane/fld1" sentchar "/text"])
            y: to-path to-word (rejoin ["face/parent/pane/fld2" sentchar "/text"])
            (to-set-path (to-path rejoin ["face/parent/pane/txt_out" sentchar "text"]))
            form multiply get x get y ] ] ]
view compose [
    (make-panel "A") return
    (make-panel "B") return ]
However, I am getting errors regarding to-word and to-path even though I have tried different combinations. Where is the problem?
Your error is in trying to create a word with a "/" character.
>> to-word "foo/bar"
*** Syntax Error: invalid character in: "foo/bar"
*** Where: to
*** Stack: to-word
My second inclination is that you shouldn't be using strings to compose value references—if nothing else you lose binding. You can try the following:
to path! compose [face parent pane (to word! rejoin ["fld2" sentchar]) text]
My first inclination is that you're overcomplicating this, but that's beyond the scope of your question.
Update:
I will attempt to address some of the other issues in this code:
Naming
A note on make-panel—it's a misnomer as you are not making a panel, just grouping some element specs together. For the purposes of this answer, I'll use the name make-row. Also, I will never have any love for names like fld1 or tout (which is an actual word!) but will persevere.
Dynamic Named Selectors
As I mentioned above, you are always better off starting with words vs. strings as in Rebol/Red, words acquire context during evaluation—words loaded from strings do not. For example:
make object! [
    foo: "bar"
    probe get first [foo]        ; "bar"
    probe get first load "[foo]" ; error
]
As you're creating three new words, let's do that explicitly:
make-row: function [row-id [string!]][
    fld1: to word! rejoin ["fld1-" row-id]
    fld2: to word! rejoin ["fld2-" row-id]
    tout: to word! rejoin ["tout-" row-id] ; note that 'tout is an English word
    ...
]
From here, we can start to build unique references in our spec.
make-row: func [row-id [string!] /local fld1 fld2 tout][
    fld1: to word! rejoin ["fld1-" row-id]
    fld2: to word! rejoin ["fld2-" row-id]
    tout: to word! rejoin ["tout-" row-id]
    compose/deep [
        text "N1:"
        (to set-word! fld1) field
        text "N2:"
        (to set-word! fld2) field
        text "Product:"
        (to set-word! tout) text
        button "Get Product" [
            ...
        ]
        return
    ]
]
Now we get into a sticky area with this button action:
x: to-path to-word (rejoin ["face/parent/pane/fld1" sentchar "/text"])
y: to-path to-word (rejoin ["face/parent/pane/fld2" sentchar "/text"])
(to-set-path (to-path rejoin ["face/parent/pane/tout" sentchar "text"] ))
form multiply get x get y ] ] ]
I think I can express in pseudo-code what you're trying to do:
Product for this row = N1 for this row * N2 for this row
The main error in your code here is that you're mixing proximity references with your named references. If you examine face/parent/pane, it has no fld1*, fld2* or tout* references in it—it's just a block of face objects. As you've gone to the effort to make unique names, let's roll with that for the moment. Remember, we're still deep in a compose/deep operation:
x: get in (fld1) 'data
y: get in (fld2) 'data
set in (tout) 'text form x * y
We're much more concise now and everything should be working (note that 'data gives you the loaded value of 'text).
Proximity Selectors
My concern by this point, though, is that we have a lot of new words floating about, and we needed that x and y. So let's return to the idea of proximity.
When you look at your composed View spec:
view probe compose [
    (make-row "A")
    (make-row "B")
]
You'll see that your main view face will contain a lot of children. To find faces within proximity of the button you're clicking, we first need to find the button within the face. Let's do this:
button "Get Product" [
this: find face/parent/pane face
]
And as there are six preceding faces associated with the button, let's go to the beginning of this set:
button "Get Product" [
this: skip find face/parent/pane face -6
]
Now we can do our calculations based on proximity:
button "Get Product" [
here: find face/parent/pane face
here/6/text: form here/2/data * here/4/data
]
Boom! We have the same product with only one new word (here) as opposed to rows-count * 3 + x + y. Awesome!
As we're not generating any additional words, we don't even need a function to generate our rows—it boils down to the following:
row: [
    text "N1:" field
    text "N2: " field
    text "Product: " text 100
    button "Get product" [
        ; go back six faces from current face
        here: skip find face/parent/pane face -6
        here/6/text: form here/2/data * here/4/data
    ]
    return
]
view compose [
    (row)
    (row)
]
Group Selectors
As you seem to have complex needs and can't always enumerate the fields you need, you can use the extra facet to group fields together. We can do this by using a block to contain the row-id and the field-id:
make-row: func [row-id][
    compose/deep [
        text "N1:" field extra [(row-id) "N1"]
        text "N2: " field extra [(row-id) "N2"]
        text "Product: " text 100 extra [(row-id) "Output"]
        button "Get product" extra (row-id) [
            ...
        ]
        return
    ]
]
view compose [
    (make-row "A")
    (make-row "B")
]
Within the button action, we can collect all of the faces associated with the row:
faces: make map! collect [
    foreach kid face/parent/pane [
        if all [
            block? kid/extra
            face/extra = kid/extra/1
        ][
            keep kid/extra/2
            keep kid
        ]
    ]
]
This gives you a nice map! with all associated faces and a simple calculation:
faces/("Output")/text: form faces/("N1")/data * faces/("N2")/data
If you're only going to use it for the product, then you don't even need to collect:
product: 0
foreach kid face/parent/pane [
    if all [
        block? kid/extra
        face/extra = kid/extra/1
    ][
        product: product + kid/value
    ]
]
A real challenge
make-panel: func [sentchar][
    compose/deep [
        text "N1:" (to-set-word rejoin ['fld1 sentchar]) field      ; TO BE NAMED fld1A and fld1B for 2 panels
        text "N2:" (to-set-word rejoin ['fld2 sentchar]) field      ; TO BE NAMED fld2A and fld2B for 2 panels
        text "Product: " (to-set-word rejoin ['tout sentchar]) text ; TO BE NAMED toutA and toutB for 2 panels
        button "Get product" [
            x: (to-path reduce [to-word rejoin ["fld1" sentchar] 'text])
            y: (to-path reduce [to-word rejoin ["fld2" sentchar] 'text])
            (to-set-path reduce [to-word rejoin ["tout" sentchar] 'text]) form multiply load x load y
        ]
    ]
]
view v: compose [
    (make-panel "A") return
    (make-panel "B") return
]
Of course, you do not need the intermediary words x and y, but that refinement you can make yourself.

Get all pixels location

I am using the PixelSearch function. I know how to find one pixel that matches my criteria, but the problem is that I would like to find all pixels of a specific color and add them to an array, so that afterwards I can pick one at random and click on it.
Source code:
Local $aCoord = PixelSearch(0, 0, $clientSize[0], $clientSize[1], 0x09126C, 10, 1, $hWnd)
If Not @error Then
    ; add to array and search next
Else
    GUICtrlSetData($someLabel, "Not Found")
EndIf
I want to find ALL PIXELS, not just the first one. How can I do this? Am I missing something?
This can't be done using PixelSearch because it stops executing when a matching pixel is found.
It can be done by looping PixelGetColor over your area. Something like:
For $x = 0 To $clientSize[0] Step 1
    For $y = 0 To $clientSize[1] Step 1
        If PixelGetColor($x, $y, $hWnd) = 0x09126C Then
            ; Add $x and $y to the array using _ArrayAdd() (or whatever you prefer)
        EndIf
    Next
Next
This might feel slower than PixelSearch because it now has to scan the entire area, instead of stopping at the first match, but it shouldn't be, since PixelSearch is based on the same principle.
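As a sketch of how that loop could fill the array and then click a random match ($hWnd, $clientSize and the colour value are taken from the question; each coordinate is stored as an "x|y" string for simplicity):

```autoit
#include <Array.au3>

Local $aPixels[0]
For $x = 0 To $clientSize[0]
    For $y = 0 To $clientSize[1]
        If PixelGetColor($x, $y, $hWnd) = 0x09126C Then
            _ArrayAdd($aPixels, $x & "|" & $y)  ; remember this matching pixel
        EndIf
    Next
Next

If UBound($aPixels) > 0 Then
    ; pick one stored coordinate at random and click it
    Local $aPick = StringSplit($aPixels[Random(0, UBound($aPixels) - 1, 1)], "|")
    MouseClick("left", $aPick[1], $aPick[2])
EndIf
```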

Selecting only a small amount of trials in a possibly huge condition file in a pseudo-randomized way

I am using the PsychoPy Builder and have only used code components in a rudimentary way.
Now I have a problem for which I think coding is inevitable, but I have no idea how to do it, and so far I haven't found helpful answers on the net.
I have an experiment with pictures of 3 valences (negative, neutral, positive).
In one of the corners of the pictures, additional pictures (letters and numbers) can appear (randomly in one of the 4 positions) in random latencies.
All in all, with all combinations (taken the identity of the letters/numbers into account), I have more than 2000 trial possibilities.
But I only need 72 trials, with the condition that each valence appears 24 times (or: each of the 36 pictures 2 times) and each latency 36 times. Thus, valence and latency should be counterbalanced, but the positions and the identities of the letters and numbers can be random. However, at a specific rate (in 25% of the trials), no letters/numbers should appear in the corners.
Is there a way to do it?
Adding a pretty simple code component in builder will do this for you. I'm a bit confused about the conditions, but you'll probably get the general idea. Let's assume that you have your 72 "fixed" conditions in a conditions file and a loop with a routine that runs for each of these conditions.
I assume that you have a TextStim in your stimulus routine. Let's say that you called it 'letternumbers'. Then the general strategy is to pre-compute a list of randomized characters and positions for each of the 72 trials and then just display them as we move through the experiment. To do this, add a code component to the top of your stimulus routine and add the following under "begin experiment":
import random  # we'll use this module to pick random elements below

# Indicator sequence, specifying whether a letter/number should be shown.
# False = do not show. True = do show.
show_letternumber = [False] * 18 + [True] * 54  # 18/72 = 25%, 54/72 = 75%.
random.shuffle(show_letternumber)

# Set of letters and numbers to present
char_set = ['1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g']  # ... and so on.
char_trial = [random.choice(char_set) if show_char else '' for show_char in show_letternumber]  # one character (or '') per trial

# List of positions
pos_set = [(0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5)]  # coordinates of your four corners
pos_trial = [random.choice(pos_set) for char in char_trial]
Then under "begin routine" in the code component, set the lettersnumbers to show the value of character_trial for that trial and at the position in pos_trial.
letternumbers.pos = pos_trial[trials.thisN] # set position. trials.thisN is the current trial number
letternumbers.text = char_trial[trials.thisN] # set text
# Save to data/log
trials.addData('pos', pos_trial[trials.thisN])
trials.addData('char', char_trial[trials.thisN])
You may need to tick "set every repeat" for the letternumbers component in Builder for the text to actually show.
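Outside Builder, the pre-computation above can be checked in plain Python. This is just a sketch of the same logic (the 72-trial / 25%-blank structure is taken from the question):

```python
import random

show = [False] * 18 + [True] * 54  # 25% of the 72 trials show no character
random.shuffle(show)

char_set = list('123456789abcdefg')
pos_set = [(0.5, 0.5), (-0.5, 0.5), (-0.5, -0.5), (0.5, -0.5)]

# one character (or '') and one corner per trial, fixed before the experiment starts
chars = [random.choice(char_set) if s else '' for s in show]
positions = [random.choice(pos_set) for _ in chars]

# Exactly one entry per trial, and exactly 18 blank trials regardless of the shuffle.
print(len(chars), chars.count(''))  # 72 18
```

Because the blank/shown split is fixed before shuffling, the 25% rate is exact rather than merely expected.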
Here is a strategy you could try, but as I don't use builder I can't integrate it into that work flow.
Prepare a list that has the types of trials you want in the right numbers. You could type this by hand if needed. For example, mytrials = ['a', 'a', ... 'd', 'd'], where those letters represent some label for the combination of trial types you want.
Then open up the console and permute that list (i.e. shuffle it).
import random
random.shuffle(mytrials)
That will shuffle mytrials in place. You can see that by just printing it. When you are happy with it, paste it into your code with some sort of loop like:
for t in mytrials:
    if t == 'a':
        <grab a picture of type 'a'>
    elif t == 'b':
        <grab a picture of type 'b'>
    else:
        <grab a picture of type 'c'>
    <then show the picture you grabbed>
There are programmatic ways to build the list with the right number of repeats, but for what you are doing it may be easier to just get going with a hand written list, and then worry about making it fancier once that works.
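As a sketch of the "programmatic way" mentioned above, building the balanced list from a dict of counts instead of typing it by hand (the 24-per-valence counts come from the question; the labels are illustrative):

```python
import random

# Build a balanced trial list: each label repeated the desired number of times.
counts = {'negative': 24, 'neutral': 24, 'positive': 24}
mytrials = [label for label, n in counts.items() for _ in range(n)]
random.shuffle(mytrials)  # pseudo-randomize the order

print(len(mytrials))               # 72
print(mytrials.count('negative'))  # 24
```

Changing a count in the dict then automatically keeps the list balanced, which is less error-prone than editing a hand-written list.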

Seeking alternative strategy for forming assortative network in netlogo

In my NetLogo simulation, I have a population of turtles that have several sociologically-relevant attributes (e.g., gender, race, age, etc.). I want them to form a network that is assortative on several of these attributes. The strategy I've been trying to use to accomplish this is to:
(i) form all possible links among the turtles;
(ii) calculate a propensity-to-pair index for each of these "potential" links, which is a weighted linear combination of how similar two turtles are on the relevant attributes; and
(iii) run a modified version of the "lottery" code from the Models Library, so that links with higher propensities to pair are more likely to be selected; the selected links are then set to be "real" and all the potential links that didn't win the lottery (i.e., are not set to real) are deleted.
The problem I'm running into is that forming all possible links in the first step causes me to run out of memory. I've done everything I can to maximize the memory that NetLogo can use on my system, so this question isn't about memory size. Rather, it's about modeling strategy. I was wondering whether anyone might have a different strategy for forming a network that is assortative on multiple turtle attributes without having to form all potential links. The reason I was forming all potential links was that it seemed necessary in order to calculate a propensity-to-pair index for the lottery code, but I'm open to any other ideas, and any suggestions would be greatly appreciated.
I’m including a draft of the modified version of the lottery code I’ve been working on, just in case it’s helpful to anyone, but it may be a little tangential since my question is more about strategy than particular coding issues. Thank you!
to initial-pair-up
  ask winning-link [ set real? true ]
end

to-report winning-link
  let pick random-float sum [propensitypair] of links
  let winner nobody
  ask links with [ not real? ]
    [ if winner = nobody
        [ ifelse similarity > pick
            [ set winner self ]
            [ set pick pick - similarity ] ] ]
  report winner
end
For a "lottery" problem, I would normally suggest using the Rnd extension, but I suspect it would not help you here, because you would still need to create a list of all propensity pairs which would still be too big.
So, assuming that you have a propensity reporter (for which I've put a dummy reporter below) here is one way that you could avoid blowing up the memory:
to create-network [ nb-links ]
  ; get our total without creating links:
  let total 0
  ask turtles [
    ask turtles with [ who > [ who ] of myself ] [
      set total total + propensity self myself
    ]
  ]
  ; pre-pick all winning numbers of the lottery:
  let picks sort n-values nb-links [ random-float total ]
  let running-sum 0
  ; loop through all possible pairs...
  ask turtles [
    if empty? picks [ stop ]
    ask turtles with [ who > [ who ] of myself ] [
      if empty? picks [ stop ]
      set running-sum running-sum + propensity self myself
      if first picks < running-sum [
        ; ...and create a link if we have a winning pair
        create-link-with myself
        set picks but-first picks
      ]
    ]
  ]
end
to-report propensity [ t1 t2 ]
  ; this is just an example, your own function is probably very different
  report
    (1 / (1 + abs ([xcor] of t1 - [xcor] of t2))) +
    (1 / (1 + abs ([ycor] of t1 - [ycor] of t2)))
end
I have tried it with 10000 turtles:
to setup
  clear-all
  create-turtles 10000 [
    set xcor random-xcor
    set ycor random-ycor
  ]
  create-network 1000
end
It takes a while to run, but it doesn't take much memory.
Maybe I have misunderstood, but I am unclear why you need to have the (potential) link established to run your lottery code. Instead, you could calculate the propensity for a pair of nodes (nodeA and nodeB) and create a link between them if the random-float is lower than the propensity. If you want it proportional to the 'share' of the propensity, calculate the total propensity over all pairs first and then the probability of creating the link is this-pair-propensity / total-propensity.
Then the only issue is running through each pair exactly once. The easiest way is probably an outer loop (nodeA) of all agents and an inner loop of agents (nodeB) with a who number that is greater than the outer loop asker (myself). Something like:
to setup
  clear-all
  create-turtles 30
  [ set xcor random-xcor
    set ycor random-ycor
  ]
  ask turtles
  [ ask turtles with [ who > [who] of myself ]
    [ if random-float 1 < 0.4 [ create-link-with myself ]
    ]
  ]
end
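A sketch of the proportional variant described above, reusing a propensity reporter like the one in the first answer (multiplying by the desired link count is an assumption on my part — it makes the expected number of links equal nb-links, rather than guaranteeing an exact count):

```netlogo
to create-network-proportional [ nb-links ]
  ; total propensity over all pairs, computed without creating any links
  let total 0
  ask turtles [
    ask turtles with [ who > [ who ] of myself ] [
      set total total + propensity self myself
    ]
  ]
  ; each pair links with probability proportional to its share of the total
  ask turtles [
    ask turtles with [ who > [ who ] of myself ] [
      if random-float 1 < (nb-links * propensity self myself / total) [
        create-link-with myself
      ]
    ]
  ]
end
```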
