Nurse scheduling model formulation in AMPL

I have been working on a nurse scheduling problem in AMPL under the following conditions:
Total no. of Nurses=20
Total no. of shifts = 3 # morning, day, night
Planning Horizon 7 days: let's say M T W R F Sa Su
Along with following constraints:
Max no. of working days in a week: 5
A rest day after 4 consecutive night shifts.
Consecutive night and morning shifts are not allowed.
Demand per shift is 7 nurses.
A nurse can work only one shift per day (morning, day, or night).
Cost scenarios:
Morning shift: $12
Day shift: $13
Night shift : $15
The objective function is to minimize the cost of operation according to nurse preferences.
Can anyone give me an idea of how this problem can be formulated?

First, some unusual things about your problem definition:
This is not a real optimization problem, since your objective value is fixed by definition (every shift needs exactly 7 nurses, and every nurse has the same price per shift).
Your problem also over-constrains supply: you defined 7 nurses per shift with a maximum of 5 working days. So you need 7 nurses on three shifts on seven days, which equals 147 nurse-shifts. But with the cap of five working days and only one shift per day, your 20 nurses can cover at most 100 nurse-shifts.
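A quick sanity check of those counts (a small standalone Python sketch, not part of the model itself):
# demand: 7 nurses on each of 3 shifts on each of 7 days
required = 7 * 3 * 7    # 147 nurse-shifts
# supply: 20 nurses, at most 5 working days, one shift per day
available = 20 * 5      # 100 nurse-shifts
print(required <= available)   # False -> infeasible as stated
print(available // (3 * 7))    # 4, the largest uniform per-shift demand that fits
That last number is why the demand constraint below uses 4 nurses per shift instead of 7.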
I've built the problem in MathProg, but the code should be more or less the same in AMPL. I started with three sets for the nurses, days and shifts.
set shifts := {1,2,3};
set days := {1,2,3,4,5,6,7};
set nurses := {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20};
The schedule is defined as a set of binary variables:
var schedule{nurses, days, shifts}, binary;
The simple objective contains the sum of all nurse-shifts in this week, weighted by the related prices. The cost parameters have to be declared as well; with the prices from your question:
param c_morning := 12;
param c_day := 13;
param c_night := 15;
minimize cost: sum{i in nurses, j in days}(schedule[i,j,1]*c_morning+schedule[i,j,2]*c_day+schedule[i,j,3]*c_night);
For your first constraint, one can simply limit the sum of all shifts per nurse to five, since only one shift per day is possible:
s.t. working_days{n in nurses}:
sum{i in days, j in shifts}(schedule[n,i,j]) <= 5;
The rest day is the hardest part of the problem. For simplicity I've created another set which contains just the days on which a nurse could already have accumulated four night shifts in a row. You can also formulate the constraint over the original set of days and exclude the first four days.
set night_days := {5,6,7};
s.t. rest{n in nurses, i in night_days}:
(schedule[n,i-4,3]+schedule[n,i-3,3]+schedule[n,i-2,3]+schedule[n,i-1,3]+sum{j in shifts}(schedule[n,i,j])) <= 4;
To forbid a morning shift right after a night shift I used the same approach as for the rest days. The seventh day is excluded, since there is no eighth day on which to look for a morning shift.
set yester_days := {1,2,3,4,5,6};
s.t. night_morning{i in yester_days, n in nurses}:
(schedule[n,i,3]+schedule[n,i+1,1]) <= 1;
The demand per shift should be met. (I've reduced the demand to four nurses per shift, since more than 4 is infeasible given the 100 available nurse-shifts shown above.)
s.t. demand_shift{i in days, j in shifts}:
sum{n in nurses}(schedule[n,i,j]) = 4;
The fifth constraint limits the number of shifts per day to at most one.
s.t. one_shift{n in nurses, i in days}:
sum{ j in shifts}(schedule[n,i,j]) <= 1;
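To actually run the model, the MathProg pieces above go into a single file (say nurses.mod, a name picked here just for illustration) together with a final solve; and end; statement; GLPK's command-line solver can then be invoked as:
glpsol -m nurses.mod -o nurses.sol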

set nurse; # full-time nurses working in the facility
set days; # planning horizon
set shift; # shifts in a day
set S; # shifts corresponding to the outsourced nurses
set D; # days corresponding to the outsourced nurses
set N; # outsourced nurses
# ith nurse working on day j
# j starts from Monday (j=1), Tuesday( j=2), Wednesday (j=3), Thursday(j=4), Friday(j=5), Saturday(j=6), Sunday(j=7)
#s be the shift as morning, day and night
param availability{i in nurse, j in days};
param costpershift{i in nurse, j in days, s in shift};
param outcost{n in N, l in D, m in S};
var nurseavailability{i in nurse,j in days,s in shift} binary; # = 1 if nurse i is available on jth day working on sth shift, 0 otherwise
var outsourced{n in N, l in D, m in S} integer >= 0; # number of outsourced nurse-shifts
#Objective function
minimize Cost: sum{i in nurse, j in days, s in shift} costpershift[i,j,s]*nurseavailability[i,j,s]+ sum{ n in N, l in D, m in S}outcost[n,l,m]*outsourced[n,l,m];
#constraints
#maximum no. of shifts per day
subject to maximum_shifts_perday {i in nurse,j in days}:
sum{s in shift} nurseavailability[i,j,s]*availability[i,j] <= 1;
#maximum no. of working days a week
subject to maximum_days_of_work {i in nurse}:
sum{j in days,s in shift} availability[i,j]*nurseavailability[i,j,s]<=5; #maximum working days irrespective of shifts
# at most 4 night shifts per week (a simplification of the rest-day rule)
subject to rest_days_after_night_shift{i in nurse}:
sum{j in days} availability[i,j]*nurseavailability[i,j,3]<=4;
#demand per shift
subject to supply{j in days, s in shift}:
sum{i in nurse} availability[i,j]*nurseavailability[i,j,s] + sum{n in N} outsourced[n,j,s] = 7;
#outsourcing only works well when there is more variability in supply.
#increasing the staff no. would be effective for reducing the cost variability in demand.
#considering a budget of $16,000 per week
#outsourcing constraints: a maximum of 20 nurses can be outsourced per shift
# no. of fulltime employees=30
#demand is 7 nurses per shift
#the average variability
#all nurses are paid equally # $12 per hour.
#cost of an outsourced shift is $144.
#cost of morning shift is $96.
#cost of day shift is $104.
#cost of night shift is $120.
data;
#set nurse ordered:= nurse1 nurse2 nurse3 nurse4 nurse5 nurse6 nurse7 nurse8
#nurse9 nurse10 nurse11 nurse12 nurse13 nurse14 nurse15 nurse16 nurse17
#nurse18 nurse19 nurse20;
set nurse:= 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30;
#set days ordered:= Monday Tuesday Wednesday Thursday Friday Saturday Sunday;
set days:= 1 2 3 4 5 6 7;
#set shift ordered:= Morning Day Night;
set shift:= 1 2 3;
set D:= 1 2 3 4 5 6 7; #outsourced days
set S := 1 2 3; #outsourced shifts
set N := 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20;
param outcost :=
[*,*,1]:
1 2 3 4 5 6 7:=
1 144 144 144 144 144 144 144
2 144 144 144 144 144 144 144
3 144 144 144 144 144 144 144
4 144 144 144 144 144 144 144
5 144 144 144 144 144 144 144
6 144 144 144 144 144 144 144
7 144 144 144 144 144 144 144
8 144 144 144 144 144 144 144
9 144 144 144 144 144 144 144
10 144 144 144 144 144 144 144
11 144 144 144 144 144 144 144
12 144 144 144 144 144 144 144
13 144 144 144 144 144 144 144
14 144 144 144 144 144 144 144
15 144 144 144 144 144 144 144
16 144 144 144 144 144 144 144
17 144 144 144 144 144 144 144
18 144 144 144 144 144 144 144
19 144 144 144 144 144 144 144
20 144 144 144 144 144 144 144
[*,*,2]:
1 2 3 4 5 6 7:=
1 144 144 144 144 144 144 144
2 144 144 144 144 144 144 144
3 144 144 144 144 144 144 144
4 144 144 144 144 144 144 144
5 144 144 144 144 144 144 144
6 144 144 144 144 144 144 144
7 144 144 144 144 144 144 144
8 144 144 144 144 144 144 144
9 144 144 144 144 144 144 144
10 144 144 144 144 144 144 144
11 144 144 144 144 144 144 144
12 144 144 144 144 144 144 144
13 144 144 144 144 144 144 144
14 144 144 144 144 144 144 144
15 144 144 144 144 144 144 144
16 144 144 144 144 144 144 144
17 144 144 144 144 144 144 144
18 144 144 144 144 144 144 144
19 144 144 144 144 144 144 144
20 144 144 144 144 144 144 144
[*,*,3]:
1 2 3 4 5 6 7:=
1 144 144 144 144 144 144 144
2 144 144 144 144 144 144 144
3 144 144 144 144 144 144 144
4 144 144 144 144 144 144 144
5 144 144 144 144 144 144 144
6 144 144 144 144 144 144 144
7 144 144 144 144 144 144 144
8 144 144 144 144 144 144 144
9 144 144 144 144 144 144 144
10 144 144 144 144 144 144 144
11 144 144 144 144 144 144 144
12 144 144 144 144 144 144 144
13 144 144 144 144 144 144 144
14 144 144 144 144 144 144 144
15 144 144 144 144 144 144 144
16 144 144 144 144 144 144 144
17 144 144 144 144 144 144 144
18 144 144 144 144 144 144 144
19 144 144 144 144 144 144 144
20 144 144 144 144 144 144 144;
param availability:
1 2 3 4 5 6 7 :=
1 0 0 0 0 0 0 0
2 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1
7 1 0 1 1 1 1 1
8 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1
10 1 1 1 1 1 1 1
11 1 1 1 1 1 1 1
12 1 1 1 1 1 1 1
13 1 1 1 1 1 1 1
14 1 1 1 1 1 1 1
15 1 1 1 1 1 1 1
16 1 1 1 1 1 1 1
17 0 1 1 1 1 1 1
18 1 1 1 1 1 1 1
19 1 1 1 1 1 1 1
20 1 1 1 1 1 1 1
21 1 1 1 1 1 1 1
22 1 1 1 1 1 1 1
23 1 1 1 1 1 1 1
24 1 1 1 1 1 1 1
25 1 1 1 1 1 1 1
26 1 1 1 1 1 1 1
27 1 1 1 1 1 1 1
28 1 1 1 1 1 1 1
29 1 1 1 1 1 1 1
30 1 1 1 1 1 1 1;
param costpershift:=
[*,*,1]: 1 2 3 4 5 6 7 :=
1 96 96 96 96 96 96 96
2 96 96 96 96 96 96 96
3 96 96 96 96 96 96 96
4 96 96 96 96 96 96 96
5 96 96 96 96 96 96 96
6 96 96 96 96 96 96 96
7 96 96 96 96 96 96 96
8 96 96 96 96 96 96 96
9 96 96 96 96 96 96 96
10 96 96 96 96 96 96 96
11 96 96 96 96 96 96 96
12 96 96 96 96 96 96 96
13 96 96 96 96 96 96 96
14 96 96 96 96 96 96 96
15 96 96 96 96 96 96 96
16 96 96 96 96 96 96 96
17 96 96 96 96 96 96 96
18 96 96 96 96 96 96 96
19 96 96 96 96 96 96 96
20 96 96 96 96 96 96 96
21 96 96 96 96 96 96 96
22 96 96 96 96 96 96 96
23 96 96 96 96 96 96 96
24 96 96 96 96 96 96 96
25 96 96 96 96 96 96 96
26 96 96 96 96 96 96 96
27 96 96 96 96 96 96 96
28 96 96 96 96 96 96 96
29 96 96 96 96 96 96 96
30 96 96 96 96 96 96 96
[*,*,2] : 1 2 3 4 5 6 7 :=
1 104 104 104 104 104 104 104
2 104 104 104 104 104 104 104
3 104 104 104 104 104 104 104
4 104 104 104 104 104 104 104
5 104 104 104 104 104 104 104
6 104 104 104 104 104 104 104
7 104 104 104 104 104 104 104
8 104 104 104 104 104 104 104
9 104 104 104 104 104 104 104
10 104 104 104 104 104 104 104
11 104 104 104 104 104 104 104
12 104 104 104 104 104 104 104
13 104 104 104 104 104 104 104
14 104 104 104 104 104 104 104
15 104 104 104 104 104 104 104
16 104 104 104 104 104 104 104
17 104 104 104 104 104 104 104
18 104 104 104 104 104 104 104
19 104 104 104 104 104 104 104
20 104 104 104 104 104 104 104
21 104 104 104 104 104 104 104
22 104 104 104 104 104 104 104
23 104 104 104 104 104 104 104
24 104 104 104 104 104 104 104
25 104 104 104 104 104 104 104
26 104 104 104 104 104 104 104
27 104 104 104 104 104 104 104
28 104 104 104 104 104 104 104
29 104 104 104 104 104 104 104
30 104 104 104 104 104 104 104
[*,*,3] : 1 2 3 4 5 6 7 :=
1 120 120 120 120 120 120 120
2 120 120 120 120 120 120 120
3 120 120 120 120 120 120 120
4 120 120 120 120 120 120 120
5 120 120 120 120 120 120 120
6 120 120 120 120 120 120 120
7 120 120 120 120 120 120 120
8 120 120 120 120 120 120 120
9 120 120 120 120 120 120 120
10 120 120 120 120 120 120 120
11 120 120 120 120 120 120 120
12 120 120 120 120 120 120 120
13 120 120 120 120 120 120 120
14 120 120 120 120 120 120 120
15 120 120 120 120 120 120 120
16 120 120 120 120 120 120 120
17 120 120 120 120 120 120 120
18 120 120 120 120 120 120 120
19 120 120 120 120 120 120 120
20 120 120 120 120 120 120 120
21 120 120 120 120 120 120 120
22 120 120 120 120 120 120 120
23 120 120 120 120 120 120 120
24 120 120 120 120 120 120 120
25 120 120 120 120 120 120 120
26 120 120 120 120 120 120 120
27 120 120 120 120 120 120 120
28 120 120 120 120 120 120 120
29 120 120 120 120 120 120 120
30 120 120 120 120 120 120 120;
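As a side note, the per-shift costs in the comments and data above are consistent with 8-hour shifts at the hourly rates of the original question: $12, $13 and $15 per hour, plus an inferred $18 per hour for outsourced nurses. A quick Python check (the 8-hour shift length and the outsourced rate are assumptions, not stated in the model):
# hourly rates: morning/day/night from the question; outsourced inferred from $144 / 8 h
hourly = {'morning': 12, 'day': 13, 'night': 15, 'outsourced': 18}
for shift, rate in hourly.items():
    print(shift, rate * 8)  # 96, 104, 120, 144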

Related

When Cloudwatch Logs data is sent into kinesis data stream, what is its encoding format

I'm trying to write a Go program to download data from an AWS Kinesis data stream. I read that Kinesis data streams encode the data with base64, so I first need to decode with base64. However, I can't figure out what encoding was used on the data as it is passed from CloudWatch Logs to the Kinesis data stream.
I've tried different decoding methods but none works. My unprocessed byte array downloaded from the Kinesis data stream is the following:
[31 139 8 0 0 0 0 0 0 0 53 206 65 11 130 64 16 134 225 191 178 204 89 130 178 34 246 22 97 30 178 130 12 58 68 196 166 147 14 233 174 236 140 69 68 255 61 204 58 190 204 7 243 188 160 70 102 83 224 254 217 32 104 88 108 55 251 221 54 57 175 163 52 157 199 17 4 224 30 22 125 119 169 92 155 63 140 100 101 226 10 134 0 42 87 196 222 181 13 104 232 43 21 143 166 238 147 219 11 103 158 26 33 103 151 84 9 122 6 125 60 125 119 209 29 173 116 249 2 202 251 185 80 141 44 166 110 64 15 167 227 201 48 28 79 166 225 108 20 6 127 94 7 56 36 234 199 83 63 158 86 139 18 179 27 217 66 149 104 42 41 149 187 170 28 89 200 154 238 179 90 145 69 38 86 252 165 13 224 125 122 127 0 234 141 66 79 242 0 0 0]
Can someone give me some tips how to process this piece of data?
You can use a subscription filter with Kinesis, Lambda, or Kinesis Data Firehose. Logs that are sent to a receiving service through a subscription filter are base64 encoded and compressed with the gzip format.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
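Since your byte array starts with 31 139 (0x1f 0x8b, the gzip magic number), your SDK has evidently done the base64 step for you already, and only the gzip decompression remains. A minimal Python sketch of the full decoding chain (the same chain applies in Go via encoding/base64 and compress/gzip):
import base64
import gzip
import json

def decode_subscription_record(data, already_decoded=True):
    # Records from a CloudWatch Logs subscription filter are gzipped JSON;
    # they are base64-encoded on the wire, but most SDKs decode that for you.
    if not already_decoded:
        data = base64.b64decode(data)
    return json.loads(gzip.decompress(data))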

Get frame / pattern of an image without loop MATLAB

I would like to extract certain parts of an image. Let's say, only those parts that are indexed by ones in some kind of template or frame.
GRAYPIC = reshape(randperm(169), 13, 13);
FRAME = ones(13);
FRAME(5:9, 5:9) = 0;
FRAME_OF_GRAYPIC = []; % the new pic that only shows the frame extracted
I can achieve this using a for loop:
for X = 1:13
for Y = 1:13
value = FRAME(Y, X);
switch value
case 1
FRAME_OF_GRAYPIC(X,Y) = GRAYPIC(X,Y)
case 0
FRAME_OF_GRAYPIC(X,Y) = 0
end
end
end
imshow(mat2gray(FRAME_OF_GRAYPIC));
However, is it possible to do this with some kind of vector operation, i.e.:
FRAME_OF_GRAYPIC = GRAYPIC(FRAME==1);
This doesn't work, unfortunately.
Any suggestions?
Too long for a comment...
GRAYPIC = reshape(randperm(169), 13, 13);
FRAME = ones(13);
FRAME(5:9, 5:9) = 0;
FRAME_OF_GRAYPIC = zeros(size(GRAYPIC)); % MUST preallocate the new pic at the right size
FRAME = logical(FRAME); % ... FRAME = (FRAME == 1)
FRAME_OF_GRAYPIC(FRAME) = GRAYPIC(FRAME);
Three things to note here:
FRAME must be a logical array. Create it with true()/false(), or cast it using logical(), or select a value to be true using FRAME = (FRAME == true_value);
You must preallocate your final image to the proper dimensions, otherwise it will turn into a vector.
You need the image indices on both sides of the assignment:
FRAME_OF_GRAYPIC(FRAME) = GRAYPIC(FRAME);
Output:
FRAME_OF_GRAYPIC =
38 64 107 63 27 132 148 160 88 59 102 69 81
14 108 76 58 49 55 51 19 158 52 100 153 39
79 139 12 115 147 154 96 112 82 73 159 146 93
169 2 71 25 33 149 138 150 129 117 65 97 17
43 111 37 142 0 0 0 0 0 128 84 86 22
9 137 127 45 0 0 0 0 0 68 28 46 163
42 11 31 29 0 0 0 0 0 152 3 85 36
50 110 165 18 0 0 0 0 0 144 143 44 109
114 133 1 122 0 0 0 0 0 80 167 157 145
24 116 60 130 53 77 156 35 6 78 90 30 140
74 120 40 26 106 166 121 34 98 57 56 13 48
8 155 4 16 124 75 123 23 105 66 7 141 70
89 113 99 101 54 20 94 72 83 168 61 5 10

Is there a way to understand what Oracle DataDump util updates in dmp file after extract?

I do not want to wait for Oracle Data Pump expdp to finish writing to the dump file.
So I start reading data from the moment it's created.
Then I write this data to another file.
It worked OK: file sizes are the same (the one that Data Pump created and the one my data monitoring script created).
But when I run cmp it shows a difference of 27 bytes:
cmp -l ora.dmp monitor_10k_rows.dmp
3 263 154
4 201 131
5 174 173
6 103 75
48 64 70
58 0 340
64 0 1
65 0 104
66 0 110
541 60 61
545 60 61
552 60 61
559 60 61
20508 0 15
20509 0 157
20510 0 230
20526 0 10
20532 0 15
20533 0 225
20534 0 150
913437 0 226
913438 0 37
913454 0 10
913460 0 1
913461 0 104
913462 0 100
ls -al ora.dmp
-rw-r--r-- 1 oracle oinstall 999424 Jun 20 11:35 ora.dmp
python -c 'print 999424-913462'
85962
od ora.dmp -j 913461 -N 1
3370065 000100
3370066
od monitor_10k_rows.dmp -j 913461 -N 1
3370065 000000
3370066
Even if I extract more data, the difference is still 27 bytes, but at different addresses/values:
cmp -l ora.dmp monitor_30k_rows.dmp
3 245 134
4 222 264
5 377 376
6 54 45
48 36 43
57 0 2
58 0 216
64 0 1
65 0 104
66 0 120
541 60 61
545 60 61
552 60 61
559 60 61
20508 0 50
20509 0 126
20510 0 173
20526 0 10
20532 0 50
20533 0 174
20534 0 120
2674717 0 226
2674718 0 47
2674734 0 10
2674740 0 1
2674741 0 104
2674742 0 110
Some writes are the same.
Is there a way to know the addresses of the bytes which will differ?
ls -al ora.dmp
-rw-r--r-- 1 bicadmin bic 2760704 Jun 20 11:09 ora.dmp
python -c 'print 2760704-2674742'
85962
How can I update my monitored copy after Data Pump updated the original at address 2674742, using Python for example?
Exactly the same thing happens if I use the COMPRESSION=DATA_ONLY option.
Update: I figured out how to sync the bytes that differ between the 2 files:
import os

def patch_file(fn, diff):
    # diff holds `cmp -l` output: one "address value value" triple (octal) per line
    for line in diff.split(os.linesep):
        if line:
            addr, to_octal, _ = line.strip().split()
            with open(fn, 'r+b') as f:
                f.seek(int(addr) - 1)           # cmp addresses are 1-based
                f.write(chr(int(to_octal, 8)))  # write the second column's byte value
diff="""
3 157 266
4 232 276
5 272 273
6 16 25
48 64 57
58 340 0
64 1 0
65 104 0
66 110 0
541 61 60
545 61 60
552 61 60
559 61 60
20508 15 0
20509 157 0
20510 230 0
20526 10 0
20532 15 0
20533 225 0
20534 150 0
913437 226 0
913438 37 0
913454 10 0
913460 1 0
913461 104 0
913462 100 0
"""
patch_file(f3, diff)  # f3: path of the monitored copy (defined elsewhere)
Then I wrote a more general patch using Python:
import binascii
import os

addr = [3, 4, 5, 6, 48, 58, 64, 65, 66, 541, 545, 552, 559, 20508, 20509, 20510, 20526, 20532, 20533, 20534]
last_range = [85987, 85986, 85970, 85964, 85963, 85962]  # offsets measured from the end of the file

def get_bytes(addr):
    # read one byte at each (1-based) address of f1 and return it as an octal string
    out = []
    with open(f1, 'r+b') as f:
        for a in addr:
            f.seek(a - 1)
            data = f.read(1)
            hex = binascii.hexlify(data)
            binary = int(hex, 16)
            octa = oct(binary)
            out.append((a, octa))
    return out

def patch_file(fn, bytes_to_update):
    with open(fn, 'r+b') as f:
        for (a, to_octal) in bytes_to_update:
            print (a, to_octal)
            f.seek(int(a) - 1)
            f.write(chr(int(to_octal, 8)))

if 1:
    from_file = f1  # f1: path of the dump file written by expdp (defined elsewhere)
    fsize = os.stat(from_file).st_size
    bytes_to_read = addr + [fsize - x for x in last_range]
    bytes_to_update = get_bytes(bytes_to_read)
    to_file = f3    # f3: path of the monitored copy
    patch_file(to_file, bytes_to_update)
The reason I do dmp file monitoring is that it cuts backup time in half.

R compare all list elements for duplicates

I am looking at all possible paths through a graph. I have written a DFS algorithm that finds all these paths. I want to make sure that my algorithm works correctly and that no two paths are identical. My algorithm returns a list that looks as follows:
....
[[2770]]
[1] 1 2 3 52 53 54 55 56 57 58 59 60 12 11 10 9 8 78 79 80 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129
[38] 130 131 132 133 134 137 138 139 140 141 142 143 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166
[[2771]]
[1] 1 2 3 52 53 54 55 56 57 58 59 60 12 11 10 9 8 78 79 80 113 114 115 143 144 145 146 147 148 149 150 151 152 153 154 155 156
[38] 157 158 159 160 161 162 163 164 165 166
[[2772]]
[1] 1 2 3 52 53 54 55 56 57 58 59 60 12 11 10 9 8 78 79 80 113 114 115 143 150 151 152 153 154 155 156 157 158 159 160 161 162
[38] 163 164 165 166
As you can see, the list is 2772 elements long, meaning there are 2772 paths through this graph. How can I easily compare all the list elements to make sure there are no duplicates? Just to be clear: the same set of numbers in a different ordering represents a different path and is not a duplicate!
Thank you for your help!
Maybe something like:
test <- list(1:2, 3:4, 5:7, 1:10, 3:4, 4:3)
dups <- duplicated(test)
idups <- seq_along(test)[dups]
duplicated() compares whole list elements, so the same numbers in a different order (like 3:4 and 4:3 above) are not treated as duplicates, which matches your definition of a path; any(dups) tells you whether any duplicate exists at all.

data.frame to spatial polygone data frame

I have this data.frame
data <- read.table(text="Id x y valecolo valecono
1 1 12.18255221 29.406365240 4 990
2 2 9.05893970 20.923087170 4 1090
3 3 1.11192442 2.460411416 0 420
4 4 15.51290096 27.185287490 16 1320
5 5 20.41913438 32.166268590 13 1050
6 6 12.75939095 17.552435030 60 1010
7 7 28.06853355 30.839057830 12 1030
8 8 6.96288868 7.177616682 33 1010
9 9 30.60527190 20.792242110 23 640
10 10 12.07646283 7.658266843 19 810
11 11 10.42878294 5.520913954 0 700
12 12 23.61674977 11.111217320 0 838
13 13 27.16148898 12.259423750 11 1330
14 14 28.00931750 6.258448426 20 777
15 15 20.79999922 -0.000877298 4 630
16 16 21.59999968 -0.005502197 38 830
17 17 19.46122172 -1.229166015 7 740
18 18 28.20370719 -6.305622777 12 660
19 19 29.94840042 -7.192584050 0 1030
20 20 29.28601258 -12.133404940 10 870
21 21 5.88104817 -3.608777319 0 1050
22 22 30.37845976 -26.784308510 0 900
23 23 13.68270042 -12.451253320 0 300
24 24 26.01871530 -26.024342420 22 1330
25 25 20.17735764 -20.829648070 21 1190
26 26 5.04404016 -5.550464740 7 1030
27 27 17.98312114 -26.468988540 0 1200
28 28 8.50660753 -12.957145840 9 850
29 29 10.79633248 -18.938827100 36 1200
30 30 13.36599497 -28.413203870 7 1240
31 31 10.77987946 -28.531459810 0 350
32 32 8.35194396 -24.410755680 28 910
33 33 1.55014408 -12.302725060 10 980
34 34 -0.00388992 -17.899999200 12 1120
35 35 -2.82062504 -16.155620130 12 450
36 36 -4.75903628 -22.962014490 20 920
37 37 -6.07839546 -15.339592840 28 840
38 38 -11.32647798 -24.068047630 0 665
39 39 -11.88138209 -24.245262620 12 1180
40 40 -14.06823800 -25.587589260 36 350
41 41 -10.92180227 -18.461223360 7 1180
42 42 -12.48843186 -20.377660600 0 400
43 43 -18.63696964 -27.415068190 18 1220
44 44 -16.73351789 -23.807549250 0 500
45 45 -22.49024869 -29.944803740 7 1040
46 46 -22.66130064 -27.391018580 0 500
47 47 -15.26565038 -17.866446720 16 1060
48 48 -24.20192852 -23.451155780 0 600
49 49 -21.39663774 -20.089958090 0 750
50 50 -12.33344998 -9.875526199 16 980
51 51 -30.94772590 -22.478895910 0 790
52 52 -24.85783868 -15.225318840 25 720
53 53 -2.44485324 -1.145728097 54 970
54 54 -24.67985433 -7.169018707 4 500
55 55 -30.82457650 -7.398346555 4 750
56 56 -23.56898920 -5.265475270 4 760
57 57 -3.91708603 -0.810208045 0 350
58 58 -26.86563675 -4.251776497 0 440
59 59 -26.64738877 -1.675324623 8 450
60 60 -8.79897138 -0.134558536 11 830
61 61 -21.78250663 1.716077388 0 920
62 62 -28.98396759 6.007465815 24 980
63 63 -34.61607994 8.311853049 8 500
64 64 -25.63850107 7.453677191 15 880
65 65 -22.98762116 11.266290120 11 830
66 66 -33.48522130 19.100848030 0 350
67 67 -25.53096486 16.777135830 21 740
68 68 -18.95412327 15.681238150 0 300
69 69 -8.94874230 8.144324435 0 500
70 70 -10.91433241 10.579099310 4 750
71 71 -13.44807236 14.327310800 0 1090
72 72 -16.24086139 20.940019610 0 500
73 73 -17.51162097 24.111886810 0 940
74 74 -12.47496424 18.363422910 0 1020
75 75 -17.76118016 27.990410510 0 660
76 76 -5.54534556 9.730834410 0 850
77 77 -11.30971858 29.934766840 0 950
78 78 -10.38743785 27.493148220 0 740
79 79 -8.61491396 25.166312360 0 950
80 80 -3.40550077 14.197273530 0 710
81 81 -0.77957621 3.770246702 0 750
82 82 -3.01234325 21.186924550 0 1200
83 83 -2.05241931 32.685624900 0 1200
84 84 -2.26900366 36.128820600 0 970
85 85 0.82954518 5.790885396 0 850
86 86 22.08151130 19.671119440 19 870
87 87 12.60107972 23.864904860 0 1260
88 88 9.78406607 26.163968270 0 600
89 89 11.69995152 33.091322170 0 1090
90 90 20.64705880 -16.439632140 0 840
91 91 24.68314851 -21.314655730 0 1561
92 92 30.33133300 -27.235396100 0 1117
93 93 -26.24691654 -22.405635470 0 1040
94 94 -21.68016500 -24.458519270 10 1000
95 95 -1.57455856 -30.874986140 0 500
96 96 -29.75642086 -5.610894981 0 350
97 97 -3.66771076 26.448084810 0 900
98 98 -26.54457307 29.824419350 0 1050
99 99 -17.90426678 18.751297440 0 200
100 100 10.22894253 -6.274450952 0 880")
And I would like to create a visualization with Thiessen polygons, then colorize the polygons according to their "valecono" value.
I tried this:
> library(deldir)
> z <- deldir(data$x, data$y, rw=c(-34.51608,30.7052719,-30.774986,36.2288206))
> w <- tile.list(z)
> plot(w, fillcol=data$valecono, close=TRUE)
The result seems weird to me, and I'm not sure how R assigned these colors.
Do you have any other suggestions for this case?
I also tried to convert my data.frame to a SpatialPolygonsDataFrame, which I did not manage. Converting my data.frame to a SpatialPointsDataFrame was not a problem, but it was not very useful either, because I could not find how to then convert it to a SpatialPolygonsDataFrame.
coords <- data[, c("x", "y")]
spdf <- SpatialPointsDataFrame(coords = coords, data = data,
proj4string = CRS("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0"))
I am trying all this because I think that with a SpatialPolygonsDataFrame it would be easier to get this visualization of the polygons colored according to the valecono of the points.
You can do
library(dismo)
coordinates(data) <- ~x + y
v <- voronoi(data)
spplot(v, "valecolo")
With base plot
s <- floor(v$valecono/400) + 1  # without sort(), so s stays aligned with the rows
plot(v, col=rainbow(60)[v$valecolo+1])
points(data, cex=s/2, col=gray((1:4)/4)[s])
