I would like to extract a certain part of an image; let's say, only those parts that are indexed by ones in some kind of template or frame.
GRAYPIC = reshape(randperm(169), 13, 13);
FRAME = ones(13);
FRAME(5:9, 5:9) = 0;
FRAME_OF_GRAYPIC = []; % the new pic that only shows the frame extracted
I can achieve this using a for loop:
for X = 1:13
    for Y = 1:13
        value = FRAME(Y, X);
        switch value
            case 1
                FRAME_OF_GRAYPIC(Y, X) = GRAYPIC(Y, X);
            case 0
                FRAME_OF_GRAYPIC(Y, X) = 0;
        end
    end
end
imshow(mat2gray(FRAME_OF_GRAYPIC));
However, is it possible to do this with some kind of vectorized operation, i.e.:
FRAME_OF_GRAYPIC = GRAYPIC(FRAME==1);
Unfortunately, though, this doesn't work.
Any suggestions?
Thanks a lot for your answers,
best,
Clemens
Too long for a comment...
GRAYPIC = reshape(randperm(169), 13, 13);
FRAME = ones(13);
FRAME(5:9, 5:9) = 0;
FRAME_OF_GRAYPIC = zeros(size(GRAYPIC)); % MUST preallocate new pic the right size
FRAME = logical(FRAME); % ... FRAME = (FRAME == 1)
FRAME_OF_GRAYPIC(FRAME) = GRAYPIC(FRAME);
Three things to note here:
FRAME must be a logical array. Create it with true()/false(), or cast it using logical(), or select a value to be true using FRAME = (FRAME == true_value);
You must preallocate your final image to the proper dimensions, otherwise it will turn into a vector.
You need the image indices on both sides of the assignment:
FRAME_OF_GRAYPIC(FRAME) = GRAYPIC(FRAME);
Output:
FRAME_OF_GRAYPIC =
38 64 107 63 27 132 148 160 88 59 102 69 81
14 108 76 58 49 55 51 19 158 52 100 153 39
79 139 12 115 147 154 96 112 82 73 159 146 93
169 2 71 25 33 149 138 150 129 117 65 97 17
43 111 37 142 0 0 0 0 0 128 84 86 22
9 137 127 45 0 0 0 0 0 68 28 46 163
42 11 31 29 0 0 0 0 0 152 3 85 36
50 110 165 18 0 0 0 0 0 144 143 44 109
114 133 1 122 0 0 0 0 0 80 167 157 145
24 116 60 130 53 77 156 35 6 78 90 30 140
74 120 40 26 106 166 121 34 98 57 56 13 48
8 155 4 16 124 75 123 23 105 66 7 141 70
89 113 99 101 54 20 94 72 83 168 61 5 10
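For anyone doing the same thing in Python, the mask-and-copy idea translates directly to NumPy boolean indexing. A minimal sketch, assuming numpy is available (the arange image is a stand-in for randperm):

```python
import numpy as np

# 13x13 test image and a frame mask with a 5x5 hole in the middle
graypic = np.arange(1, 170).reshape(13, 13)
frame = np.ones((13, 13), dtype=bool)
frame[4:9, 4:9] = False  # 0-based equivalent of MATLAB's 5:9

# Preallocate, then copy only the masked pixels -- mirrors
# FRAME_OF_GRAYPIC(FRAME) = GRAYPIC(FRAME)
frame_of_graypic = np.zeros_like(graypic)
frame_of_graypic[frame] = graypic[frame]
```

As in MATLAB, the boolean mask must appear on both sides of the assignment so the unmasked positions stay zero.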
I am trying to create a low pass filter in MATLAB.
I have a 12x6 matrix:
56 147 56 52 147 52;
50 146 46 56 141 53;
59 142 145 147 147 46;
60 147 147 145 145 47;
52 145 35 47 146 52;
54 142 46 50 145 45;
56 52 56 52 56 52;
54 142 146 147 145 45;
59 142 53 45 147 46;
60 147 53 45 145 47;
52 145 124 145 142 52;
35 146 50 51 53 141;
with a 1/25 averaging mask (a 5x5 matrix of ones, scaled by 1/25).
First I converted the matrix to an image:
M = [
56 147 56 52 147 52;
50 146 46 56 141 53;
59 142 145 147 147 46;
60 147 147 145 145 47;
52 145 35 47 146 52;
54 142 46 50 145 45;
56 52 56 52 56 52;
54 142 146 147 145 45;
59 142 53 45 147 46;
60 147 53 45 145 47;
52 145 124 145 142 52;
35 146 50 51 53 141;
];
f=uint8(M);
figure;imshow(f);title('Image from matrix');
The image looks very small because the matrix is also small.
Then I ran the filter with the following code:
a=f;
b = size(M); % check size my matrix
n=5;
n1=ceil(n/2);
lpf=(1/n^2)*ones(n); % n x n averaging mask: ones scaled by 1/n^2
c=0;
h=0;
for i=n1:b(1)-n1
for j=n1:b(2)-n1
p=1;
for k=1:n
for l=1:n
c=c+a(i-n1+k,j-n1+l)*lpf(l,l);
end
end
d(i,j)=c;
c=0;
end
end
e=uint8(d);
figure;imshow(e);title('low pass image');
The result is stored in the e variable. When I checked it, my matrix had become even smaller than before; its size is now 9x3:
0 0 0
0 0 0
0 0 105
0 0 105
0 0 97
0 0 97
0 0 89
0 0 90
0 0 97
Am I missing something?
You are applying the filter only to the region of your image where the whole mask fits inside. That means you're cropping two columns on the left, two columns on the right, two rows on the top and two rows on the bottom. There are two bugs in your code:
You're cropping one row and one column more than necessary.
You're implicitly padding the result with zeros on the left and on the top.
Additionally, you should preallocate your matrices.
Here is a fix:
clear;
M = [
56 147 56 52 147 52;
50 146 46 56 141 53;
59 142 145 147 147 46;
60 147 147 145 145 47;
52 145 35 47 146 52;
54 142 46 50 145 45;
56 52 56 52 56 52;
54 142 146 147 145 45;
59 142 53 45 147 46;
60 147 53 45 145 47;
52 145 124 145 142 52;
35 146 50 51 53 141;
];
f = uint8(M);
a=f;
b = size(M);
n=5;
n1=ceil(n/2);
lpf=(1/n^2)*ones(n);
c=0;
h=0;
d = zeros(b(1) - 2 * n1 + 2, b(2) - 2 * n1 + 2);
for i=n1:b(1)-n1+1
for j=n1:b(2)-n1+1
for k=1:n
for l=1:n
c=c+a(i-n1+k,j-n1+l)*lpf(k,l); % index the mask by (k,l); lpf(l,l) only worked because the mask is uniform
end
end
d(i - n1 + 1, j - n1 + 1)=c;
c=0;
end
end
e=uint8(d);
If you want the result to keep the size of the original image you have to choose a strategy for the edges. Common strategies are zero-padding, padding with the last value or circular padding but the strategy strongly depends on the use-case.
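For illustration, here is the zero-padding strategy sketched in Python; the same loops translate straight back to MATLAB. Out-of-range neighbours contribute zero, so the output keeps the size of the input:

```python
def box_filter_zero_pad(img, n=5):
    """n x n mean filter; neighbours outside the image count as zero,
    so the output has the same size as the input."""
    rows, cols = len(img), len(img[0])
    half = n // 2
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    r, c = i + di, j + dj
                    if 0 <= r < rows and 0 <= c < cols:
                        acc += img[r][c]
            out[i][j] = acc / (n * n)
    return out

# Small usage example: a flat 5x5 image smoothed with a 3x3 window
img = [[1.0] * 5 for _ in range(5)]
smoothed = box_filter_zero_pad(img, 3)
```

Note how the corners get darkened (4 of 9 window cells fall inside the image), which is exactly the edge artifact the padding strategy decides about.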
Here you can find more details.
I'm trying to use a variable range with Ruby, but my code does not work:
ruby -e ' input2=145..170 ; input3= input2.to_s.gsub(/(.*?)\.\.(.*?)/) { 5.upto($2.to_i) { |i| print i, " " } }; print input3' > zzmf
But I obtained 5170
This part fails:
5.upto($2.to_i) { |i| print i, " " }
I expected:
5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 5170
I don't think gsub is what you need; try the match example below. [2] gets the second capture group from the regex /(\d+)\.\.(\d+)/ applied to "145..170":
5.upto("145..170".match(/(\d+)\.\.(\d+)/)[2].to_i) { |i| print i, " "}
gsub is intended for string substitution.
https://ruby-doc.org/core-2.1.4/String.html#method-i-gsub
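The same capture-and-iterate idea, sketched in Python for comparison (re.match plays the role of Ruby's String#match, and group(2) the role of [2]):

```python
import re

s = "145..170"
m = re.match(r"(\d+)\.\.(\d+)", s)  # two capture groups around the ".." separator
upper = int(m.group(2))             # second capture group, like Ruby's [2]
numbers = " ".join(str(i) for i in range(5, upper + 1))
```

The escaping of the dots matters in both languages: an unescaped `..` matches any two characters, not the literal range separator.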
I looked at my code again and found my mistake in the regular expression: I used the lazy .*? for the second capture group, where the greedy (.*) is needed:
ruby -e ' input2=145..170 ; input3= input2.to_s.gsub(/(.*?)\.\.(.*)/) { 5.upto($2.to_i) { |i| print i, " " } }; print input3' > zzmf
Thanks for your responses!
I do not want to wait for Oracle DataDump expdb to finish writing to dump file.
So I start reading data from the moment it's created.
Then I write this data to another file.
It worked OK - the file sizes are the same (the one that Oracle DataDump created and the one my monitoring script created).
But when I run cmp it shows difference in 27 bytes:
cmp -l ora.dmp monitor_10k_rows.dmp
3 263 154
4 201 131
5 174 173
6 103 75
48 64 70
58 0 340
64 0 1
65 0 104
66 0 110
541 60 61
545 60 61
552 60 61
559 60 61
20508 0 15
20509 0 157
20510 0 230
20526 0 10
20532 0 15
20533 0 225
20534 0 150
913437 0 226
913438 0 37
913454 0 10
913460 0 1
913461 0 104
913462 0 100
ls -al ora.dmp
-rw-r--r-- 1 oracle oinstall 999424 Jun 20 11:35 ora.dmp
python -c 'print 999424-913462'
85962
od ora.dmp -j 913461 -N 1
3370065 000100
3370066
od monitor_10k_rows.dmp -j 913461 -N 1
3370065 000000
3370066
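The od calls above inspect a single byte at a given offset; the same check can be sketched in Python 3 (the sample file below is a hypothetical stand-in for ora.dmp). Note that od -j offsets are 0-based, while the addresses printed by cmp -l are 1-based:

```python
import os
import tempfile

# Hypothetical stand-in for ora.dmp: a file whose byte at offset i is i
path = os.path.join(tempfile.mkdtemp(), "sample.dmp")
with open(path, "wb") as f:
    f.write(bytes(range(256)))

def byte_at(fn, offset):
    """Return the byte at a 0-based offset, like `od -j offset -N 1`."""
    with open(fn, "rb") as f:
        f.seek(offset)
        return f.read(1)[0]

print(oct(byte_at(path, 64)))  # -> 0o100, matching od's octal notation
```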
Even if I extract more data the difference is still 27 bytes but different addresses/values:
cmp -l ora.dmp monitor_30k_rows.dmp
3 245 134
4 222 264
5 377 376
6 54 45
48 36 43
57 0 2
58 0 216
64 0 1
65 0 104
66 0 120
541 60 61
545 60 61
552 60 61
559 60 61
20508 0 50
20509 0 126
20510 0 173
20526 0 10
20532 0 50
20533 0 174
20534 0 120
2674717 0 226
2674718 0 47
2674734 0 10
2674740 0 1
2674741 0 104
2674742 0 110
Some writes are the same.
Is there a way to know the addresses of the bytes which will differ?
ls -al ora.dmp
-rw-r--r-- 1 bicadmin bic 2760704 Jun 20 11:09 ora.dmp
python -c 'print 2760704-2674742'
85962
How can I update my monitored copy after DataDump has updated the original at address 2674742, using Python for example?
Exact same thing happens if I use COMPRESSION=DATA_ONLY option.
Update: I figured out how to sync the bytes that differ between the two files:
def patch_file(fn, diff):
    # needs: import os
    # diff is `cmp -l` output: 1-based address, octal value in file 1, octal value in file 2
    with open(fn, 'r+b') as f:
        for line in diff.split(os.linesep):
            if line:
                addr, to_octal, _ = line.strip().split()
                f.seek(int(addr) - 1)
                f.write(chr(int(to_octal, 8)))
diff="""
3 157 266
4 232 276
5 272 273
6 16 25
48 64 57
58 340 0
64 1 0
65 104 0
66 110 0
541 61 60
545 61 60
552 61 60
559 61 60
20508 15 0
20509 157 0
20510 230 0
20526 10 0
20532 15 0
20533 225 0
20534 150 0
913437 226 0
913438 37 0
913454 10 0
913460 1 0
913461 104 0
913462 100 0
"""
patch_file(f3,diff)
I wrote a patch using Python:
addr=[3 , 4 , 5 , 6 , 48 , 58 , 64 , 65 , 66 , 541 , 545 , 552 , 559 , 20508 , 20509 , 20510 , 20526 , 20532 , 20533 , 20534 ]
last_range=[85987, 85986, 85970, 85964, 85963, 85962]
def get_bytes(addr):
    # needs: import binascii
    out = []
    with open(f1, 'r+b') as f:
        for a in addr:
            f.seek(a - 1)
            data = f.read(1)
            hex = binascii.hexlify(data)
            binary = int(hex, 16)
            octa = oct(binary)
            out.append((a, octa))
    return out
def patch_file(fn, bytes_to_update):
    with open(fn, 'r+b') as f:
        for (a, to_octal) in bytes_to_update:
            print (a, to_octal)
            f.seek(int(a) - 1)
            f.write(chr(int(to_octal, 8)))
from_file = f1
fsize = os.stat(from_file).st_size
bytes_to_read = addr + [fsize - x for x in last_range]
bytes_to_update = get_bytes(bytes_to_read)
to_file = f3
patch_file(to_file, bytes_to_update)
The reason I monitor the dmp file is that it cuts backup time in half.
I am using Vowpal Wabbit to classify multi-class images. My data set is similar to http://www.cs.toronto.edu/~kriz/cifar.html, consisting of 3000 training samples and 500 testing samples. The features are the RGB values of 32*32 images. I trained the model with Vowpal Wabbit's logistic loss function for 100 iterations. During training the average loss is below 0.02 (I assume this number is pretty good, right?). Then I predicted the labels of the training set with the output model and found that the predictions are very bad: nearly all of them are of category six. I really don't know what happened, because during training the predictions seemed mostly correct, but when I predict with the saved model they suddenly all become 6.
Here is a sample line of feature.
1 | 211 174 171 165 161 161 162 163 163 163 163 163 163 163 163 163
162 161 162 163 163 163 163 164 165 167 168 167 168 163 160 187 153
102 96 90 89 90 91 92 92 92 92 92 92 92 92 92 92 92 91 90 90 90 90 91
92 94 95 96 99 97 98 127 111 71 71 64 66 68 69 69 69 69 69 69 70 70 69
69 70 71 71 69 68 68 68 68 70 72 73 75 78 78 81 96 111 69 68 61 64 67
67 67 67 67 67 67 68 67 67 66 67 68 69 68 68 67 66 66 67 69 69 69 71
70 77 89 116 74 76 71 72 74 74 72 73 74 74 74 74 74 74 74 72 72 74 76
76 75 74 74 74 73 73 72 73 74 85 92 123 83 86 83 82 83 83 82 83 83 82
82 82 82 82 82 81 80 82 85 85 84 83 83 83 85 85 85 85 86 94 95 127 92
96 93 93 92 91 91 91 91 91 90 89 89 86 86 86 86 87 89 89 88 88 88 92
92 93 98 100 96 98 96 132 99 101 98 98 97 95 93 93 94 93 93 95 96 97
95 96 96 96 96 95 94 100 103 98 93 95 100 105 103 103 96 139 106 108
105 102 100 98 98 98 99 99 100 100 95 98 93 81 78 79 77 76 76 79 98
107 102 97 98 103 107 108 99 145 115 118 115 115 115 113 ......
Here is my training script:
./vw train.vw --oaa 6 --passes 100 --loss_function logistic -c
--holdout_off -f image_classification.model
Here is my predicting script (on the training data set):
./vw -i image_classification.model -t train.vw -p train.predict --quiet
Here are the statistics during training:
final_regressor = image_classification.model
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
decay_learning_rate = 1
using cache_file = train.vw.cache
ignoring text input in favor of cache input
num sources = 1
average    since    example  example  current  current  current
loss       last     counter  weight   label    predict  features
0.000000 0.000000 1 1.0 1 1 3073
0.000000 0.000000 2 2.0 1 1 3073
0.000000 0.000000 4 4.0 1 1 3073
0.000000 0.000000 8 8.0 1 1 3073
0.000000 0.000000 16 16.0 1 1 3073
0.000000 0.000000 32 32.0 1 1 3073
0.000000 0.000000 64 64.0 1 1 3073
0.000000 0.000000 128 128.0 1 1 3073
0.000000 0.000000 256 256.0 1 1 3073
0.001953 0.003906 512 512.0 2 2 3073
0.002930 0.003906 1024 1024.0 3 3 3073
0.002930 0.002930 2048 2048.0 5 5 3073
0.006836 0.010742 4096 4096.0 3 3 3073
0.012573 0.018311 8192 8192.0 5 5 3073
0.014465 0.016357 16384 16384.0 3 3 3073
0.017029 0.019592 32768 32768.0 6 6 3073
0.017731 0.018433 65536 65536.0 6 6 3073
0.017891 0.018051 131072 131072.0 5 5 3073
0.017975 0.018059 262144 262144.0 3 3 3073
finished run
number of examples per pass = 3000
passes used = 100
weighted example sum = 300000.000000
weighted label sum = 0.000000
average loss = 0.017887
total feature number = 921900000
It seems to me that it predicts perfectly during training, but after I use the output model, suddenly everything becomes category 6. I really have no idea what has gone wrong.
There are several problems in your approach.
1) I guess the training set contains first all images with label 1, then all examples with label 2 and so on, the last label is 6. You need to shuffle such training data if you want to use online learning (which is the default learning algorithm in VW).
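Shuffling a training file takes only a few lines. A sketch in Python (the example labels are made up; a real train.vw would be read line by line the same way):

```python
import random

def shuffle_examples(lines, seed=42):
    """Return the VW example lines in random order (fixed seed for repeatability)."""
    rng = random.Random(seed)
    shuffled = list(lines)
    rng.shuffle(shuffled)
    return shuffled

# Label-sorted input, the pattern online learning struggles with
examples = ["1 | a", "1 | b", "2 | c", "2 | d", "3 | e", "3 | f"]
shuffled = shuffle_examples(examples)
```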
2) VW uses sparse feature format. The order of features on one line is not important (unless you use --ngram). So if feature number 1 (red channel of the top left pixel) has value 211 and feature number 2 (red channel of the second pixel) has value 174, you need to use:
1 | 1:211 2:174 ...
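A helper to emit that indexed format from a dense pixel list can be sketched as follows (a hypothetical function, not part of VW):

```python
def to_vw_line(label, pixels):
    """Build 'label | 1:v1 2:v2 ...' so every pixel keeps its feature index."""
    feats = " ".join("%d:%d" % (i, v) for i, v in enumerate(pixels, start=1))
    return "%s | %s" % (label, feats)

line = to_vw_line(1, [211, 174, 171])
# line == "1 | 1:211 2:174 3:171"
```

Without the explicit indices, VW would hash the pixel values themselves as feature names, losing the positional information entirely.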
3) To get good results in image recognition you need something better than a linear model on the raw pixel values. Unfortunately, VW has no deep learning (multi-layer neural net), no convolutional nets. You can try --nn X to get neural net with one hidden layer with X units (and tanh activation function), but this is just a poor substitute for the state-of-the-art approaches to CIFAR etc. You can also try other non-linear reductions available in VW (-q, --cubic, --lrq, --ksvm, --stage_poly). In general, I think VW is not suitable for such tasks (image recognition), unless you apply some preprocessing which generates (a lot of) features (e.g. SIFT).
4) You are overfitting.
average loss is below 0.02 (I assume this number is pretty good right?)
No. You used --holdout_off, so the reported loss is the train loss. It is easy to get almost zero train loss by simply memorizing all examples, i.e. overfitting. However, what you want is a low test (or holdout) loss.
I have this data.frame
data <- read.table(text="Id x y valecolo valecono
1 1 12.18255221 29.406365240 4 990
2 2 9.05893970 20.923087170 4 1090
3 3 1.11192442 2.460411416 0 420
4 4 15.51290096 27.185287490 16 1320
5 5 20.41913438 32.166268590 13 1050
6 6 12.75939095 17.552435030 60 1010
7 7 28.06853355 30.839057830 12 1030
8 8 6.96288868 7.177616682 33 1010
9 9 30.60527190 20.792242110 23 640
10 10 12.07646283 7.658266843 19 810
11 11 10.42878294 5.520913954 0 700
12 12 23.61674977 11.111217320 0 838
13 13 27.16148898 12.259423750 11 1330
14 14 28.00931750 6.258448426 20 777
15 15 20.79999922 -0.000877298 4 630
16 16 21.59999968 -0.005502197 38 830
17 17 19.46122172 -1.229166015 7 740
18 18 28.20370719 -6.305622777 12 660
19 19 29.94840042 -7.192584050 0 1030
20 20 29.28601258 -12.133404940 10 870
21 21 5.88104817 -3.608777319 0 1050
22 22 30.37845976 -26.784308510 0 900
23 23 13.68270042 -12.451253320 0 300
24 24 26.01871530 -26.024342420 22 1330
25 25 20.17735764 -20.829648070 21 1190
26 26 5.04404016 -5.550464740 7 1030
27 27 17.98312114 -26.468988540 0 1200
28 28 8.50660753 -12.957145840 9 850
29 29 10.79633248 -18.938827100 36 1200
30 30 13.36599497 -28.413203870 7 1240
31 31 10.77987946 -28.531459810 0 350
32 32 8.35194396 -24.410755680 28 910
33 33 1.55014408 -12.302725060 10 980
34 34 -0.00388992 -17.899999200 12 1120
35 35 -2.82062504 -16.155620130 12 450
36 36 -4.75903628 -22.962014490 20 920
37 37 -6.07839546 -15.339592840 28 840
38 38 -11.32647798 -24.068047630 0 665
39 39 -11.88138209 -24.245262620 12 1180
40 40 -14.06823800 -25.587589260 36 350
41 41 -10.92180227 -18.461223360 7 1180
42 42 -12.48843186 -20.377660600 0 400
43 43 -18.63696964 -27.415068190 18 1220
44 44 -16.73351789 -23.807549250 0 500
45 45 -22.49024869 -29.944803740 7 1040
46 46 -22.66130064 -27.391018580 0 500
47 47 -15.26565038 -17.866446720 16 1060
48 48 -24.20192852 -23.451155780 0 600
49 49 -21.39663774 -20.089958090 0 750
50 50 -12.33344998 -9.875526199 16 980
51 51 -30.94772590 -22.478895910 0 790
52 52 -24.85783868 -15.225318840 25 720
53 53 -2.44485324 -1.145728097 54 970
54 54 -24.67985433 -7.169018707 4 500
55 55 -30.82457650 -7.398346555 4 750
56 56 -23.56898920 -5.265475270 4 760
57 57 -3.91708603 -0.810208045 0 350
58 58 -26.86563675 -4.251776497 0 440
59 59 -26.64738877 -1.675324623 8 450
60 60 -8.79897138 -0.134558536 11 830
61 61 -21.78250663 1.716077388 0 920
62 62 -28.98396759 6.007465815 24 980
63 63 -34.61607994 8.311853049 8 500
64 64 -25.63850107 7.453677191 15 880
65 65 -22.98762116 11.266290120 11 830
66 66 -33.48522130 19.100848030 0 350
67 67 -25.53096486 16.777135830 21 740
68 68 -18.95412327 15.681238150 0 300
69 69 -8.94874230 8.144324435 0 500
70 70 -10.91433241 10.579099310 4 750
71 71 -13.44807236 14.327310800 0 1090
72 72 -16.24086139 20.940019610 0 500
73 73 -17.51162097 24.111886810 0 940
74 74 -12.47496424 18.363422910 0 1020
75 75 -17.76118016 27.990410510 0 660
76 76 -5.54534556 9.730834410 0 850
77 77 -11.30971858 29.934766840 0 950
78 78 -10.38743785 27.493148220 0 740
79 79 -8.61491396 25.166312360 0 950
80 80 -3.40550077 14.197273530 0 710
81 81 -0.77957621 3.770246702 0 750
82 82 -3.01234325 21.186924550 0 1200
83 83 -2.05241931 32.685624900 0 1200
84 84 -2.26900366 36.128820600 0 970
85 85 0.82954518 5.790885396 0 850
86 86 22.08151130 19.671119440 19 870
87 87 12.60107972 23.864904860 0 1260
88 88 9.78406607 26.163968270 0 600
89 89 11.69995152 33.091322170 0 1090
90 90 20.64705880 -16.439632140 0 840
91 91 24.68314851 -21.314655730 0 1561
92 92 30.33133300 -27.235396100 0 1117
93 93 -26.24691654 -22.405635470 0 1040
94 94 -21.68016500 -24.458519270 10 1000
95 95 -1.57455856 -30.874986140 0 500
96 96 -29.75642086 -5.610894981 0 350
97 97 -3.66771076 26.448084810 0 900
98 98 -26.54457307 29.824419350 0 1050
99 99 -17.90426678 18.751297440 0 200
100 100 10.22894253 -6.274450952 0 880")
And I would like to create a visualization with Thiessen polygons (a Voronoi tessellation), then colour the polygons according to their "valecono" value.
I tried this:
> library(deldir)
> z <- deldir(data$x, data$y, rw=c(-34.51608,30.7052719,-30.774986,36.2288206))
> w <- tile.list(z)
> plot(w, fillcol=data$valecono, close=TRUE)
Which seems weird to me, and I'm not sure how R attributed these colors.
Do you have any other suggestions for this case?
I also tried to convert my data.frame to a SpatialPolygonsDataFrame, which I did not manage. Converting it to a SpatialPointsDataFrame was not a problem, but also not very useful, because I could not find how to convert that to a SpatialPolygonsDataFrame afterwards.
spdf <- SpatialPointsDataFrame(coords = coords, data = data,
proj4string = CRS("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0"))
I try all this because I think that with a SpatialPointsDataFrame, it would be easier to have this visualization of polygons with colors according to the valecono of the points.
You can do
library(dismo)
coordinates(data) <- ~x + y
v <- voronoi(data)
spplot(v, "valecono")
With base plot
s <- (floor(sort(v$valecono)/400) + 1)
plot(v, col=rainbow(60)[v$valecolo+1])
points(data, cex=s/2, col=gray((1:4)/4)[s])
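For intuition, the nearest-seed rule behind a Voronoi tessellation can be sketched in plain Python (a discrete approximation only, not a replacement for deldir or dismo::voronoi; the seed coordinates below are a made-up subset of the data):

```python
import math

# (x, y, valecono) for a few hypothetical seed points
seeds = [(12.2, 29.4, 990), (9.1, 20.9, 1090), (1.1, 2.5, 420)]

def nearest_seed_value(x, y):
    """valecono of the closest seed: the Voronoi cell that (x, y) falls into."""
    return min(seeds, key=lambda s: math.hypot(s[0] - x, s[1] - y))[2]

# Colour a coarse grid of query points by their cell's attribute
grid = [[nearest_seed_value(x, y) for x in range(0, 15, 5)] for y in range(0, 31, 10)]
```

Every point of the plane inherits the attribute of its nearest seed, which is exactly what spplot colours: each polygon is the set of locations closest to one data point.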