How to prove the density matrix after measurement?

I have always taken this equation for granted:

$$\rho_k = \frac{E_k \rho E_k^\dagger}{\operatorname{Tr}(E_k \rho E_k^\dagger)}$$

I tried to verify that $E_k E_k = C_k E_k$, but failed.
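A sketch of the standard derivation (following the usual textbook argument, e.g. Nielsen & Chuang; the intermediate steps are my own filling-in, not from the original post): for a pure state $|\psi_i\rangle$, outcome $k$ leaves the normalized state $E_k|\psi_i\rangle / \sqrt{\langle\psi_i|E_k^\dagger E_k|\psi_i\rangle}$. For a mixture $\rho = \sum_i p_i |\psi_i\rangle\langle\psi_i|$, weight each branch by the conditional probability $p(i \mid k) = p_i \langle\psi_i|E_k^\dagger E_k|\psi_i\rangle / \operatorname{Tr}(E_k \rho E_k^\dagger)$, and the per-branch normalizations cancel:

$$\rho_k = \sum_i p(i \mid k)\, \frac{E_k|\psi_i\rangle\langle\psi_i|E_k^\dagger}{\langle\psi_i|E_k^\dagger E_k|\psi_i\rangle} = \frac{E_k \left(\sum_i p_i |\psi_i\rangle\langle\psi_i|\right) E_k^\dagger}{\operatorname{Tr}(E_k \rho E_k^\dagger)} = \frac{E_k \rho E_k^\dagger}{\operatorname{Tr}(E_k \rho E_k^\dagger)}.$$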

Related

timeVariation in openair R package

Usually, when using the timeVariation function to plot variation in pollutant concentration over time in the openair R package, the hour of day (on the x axis) spans from 0:00 to 23:00.
For example:
timeVariation(filter(MetConc2),
              pollutant = "PM2.5", ylab = "PM2.5 (ug/m3)")
But I need to plot a graph with the time frame running from 6:00 to 5:00. What code can I use for this? Can anyone help me out?

logits and labels must be broadcastable: logits_size=[82944,2] labels_size=[90000,2]

I am working on a project of semantic segmentation of retinal blood vessels in TensorFlow with the MobileUNet model, and I have received this error:
InvalidArgumentError (see above for traceback): logits and labels must
be broadcastable: logits_size=[82944,2] labels_size=[90000,2]
[[Node: softmax_cross_entropy_with_logits_sg = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](softmax_cross_entropy_with_logits_sg/Reshape,
softmax_cross_entropy_with_logits_sg/Reshape_1)]]
My code is as follows:
net_input = tf.placeholder(tf.float32, shape=[None, None, None, 3])
net_output = tf.placeholder(tf.float32, shape=[None, None, None, num_classes])
network = build_mobile_unet(net_input, preset_model=args.model, num_classes=num_classes)
losses = tf.nn.softmax_cross_entropy_with_logits(logits=network, labels=net_output)
cost = tf.reduce_mean(losses)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
init = tf.initialize_all_variables()
_, current = sess.run([opt, cost], feed_dict={net_input: input_image_batch, net_output: segmented_image_batch})
The input image is 300x300 in the RGB colour space. The output is a binary image of the same size as the input.
Can someone help me?
We answered this problem, which is also related to the architecture, in the following link:
Input to reshape is a tensor with 37632 values, but the requested shape has 150528
Let us know if you face any issues.
The same problem occurred with me. It comes up when the label size is larger than the number of classes in the dataset.
In my last fully connected (Dense) layer I had used 46, but my dataset only has 38 classes. When I used 38 instead of 46, the problem was solved.
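A note on the numbers (my own arithmetic, not from the answers above): 82944 = 288 x 288 and 90000 = 300 x 300, so the logits are spatially 288x288 while the labels are 300x300. Encoder-decoder models such as MobileUNet typically downsample and re-upsample by a factor of 32, and 300 is not divisible by 32, so the decoder cannot recover the original size. A minimal sketch of one workaround, assuming you can resize inputs and masks to a multiple of 32 such as 320:

import tensorflow as tf  # TF 1.x, matching the question's API

# Hypothetical fix: declare placeholders with a spatial size divisible
# by 32 so the decoder output matches the labels again; resize the
# images and label masks to this size before feeding them (e.g. with
# tf.image.resize_images, nearest neighbour for the masks).
num_classes = 2
net_input = tf.placeholder(tf.float32, shape=[None, 320, 320, 3])
net_output = tf.placeholder(tf.float32, shape=[None, 320, 320, num_classes])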

compute accuracy of validation set

I saved the train and validation sets as tfrecord files. inference takes input images and returns logits; loss and accuracy compute the loss and accuracy. With this code the network trains well (train-set accuracy increases and loss decreases), but the validation-set accuracy stays almost fixed. Using TensorBoard, I found that computing the validation accuracy creates a new graph branch that doesn't use the main graph's weights. How can I evaluate accuracy on the validation set simultaneously?
def run_training():
    train_images, train_labels = read_and_decode_tfrecord_train(train_data_path)
    val_images, val_labels = read_and_decode_tfrecord_validation(validation_data_path)
    train_images = tf.cast(train_images, tf.float32) / 255.
    val_images = tf.cast(val_images, tf.float32) / 255.
    batch_Xs, batch_Ys = tf.train.shuffle_batch([train_images, train_labels], batch_size=500, capacity=500, min_after_dequeue=100)
    batch_xs, batch_ys = tf.train.shuffle_batch([val_images, val_labels], batch_size=500, capacity=500, min_after_dequeue=100)
    logits = inference(batch_Xs, 1)
    total_loss = loss(logits, batch_Ys)
    train_op = training(total_loss, learning_rate=LEARNING_RATE)
    accuracy = evaluation(logits, batch_Ys)
    val_logits = inference(batch_xs, 1)
    val_accuracy = evaluation(val_logits, batch_ys)
    saver = tf.train.Saver(tf.all_variables(), max_to_keep=4)
    sess = tf.Session()
    init = tf.initialize_all_variables()
    sess.run(init)
    tf.train.start_queue_runners(sess=sess)
    for i in range(NUM_ITER):
        _, loss_value, acc = sess.run([train_op, total_loss, accuracy])
        if i % 10 == 0:
            val_acc = sess.run(val_accuracy)
            print 'Iteration:', i, ' Loss:', loss_value, ' Train Accuracy:', acc, ' Validation Accuracy:', val_acc
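One way to make the validation branch share the training weights (a minimal sketch, assuming inference creates its variables with tf.get_variable) is to build both branches inside the same variable scope, with reuse enabled for the second call:

import tensorflow as tf  # TF 1.x, matching the question

with tf.variable_scope('model'):
    logits = inference(batch_Xs, 1)
with tf.variable_scope('model', reuse=True):
    # Reuses the variables created above instead of creating new ones,
    # so the validation accuracy reflects the trained weights.
    val_logits = inference(batch_xs, 1)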

Matlab image filtering without using conv2

I've been given a task to create an image filtering function for 3x3 kernels, and its output must equal conv2's. I have written this function, but it filters the image incorrectly:
function [ image ] = Func134( img, matrix )
    image = img;
    len = length(img)
    for i = 2:1:len-1
        for j = 2:1:len-1
            value = 0;
            for g = -1:1:1
                for l = -1:1:1
                    value = value + img(i+g, j+l) * matrix(g+2, l+2);
                end
            end
            image(i,j) = value;
        end
    end
    i = 1:1:length
    image(i,1) = image(i,2)
    image(i,len) = image(i,len-1)
    image(1,i) = image(2,i)
    image(len,i) = image(len-1,i)
end
The filtering kernel is [3,10,3;0,0,0;-3,-10,-3].
Please help me figure out what is wrong with my code.
Some sample results comparing conv2 and my code are shown below.
First off, this line doesn't make sense:
i=1:1:length;
I think you meant to use len instead of length as the ending index:
i=1:1:len;
Now referring to your code, it is correct, but what you are doing is correlation, not convolution. In 2D convolution, you have to perform a 180 degree rotation of the kernel / mask and then do the weighted sum. As such, if you want to achieve the same results using conv2, you must pre-rotate the mask before calling it:
mask = [3,10,3;0,0,0;-3,-10,-3];
mask_flip = mask(end:-1:1,end:-1:1);
out = conv2(img, mask_flip, 'same');
mask_flip contains the 180 degree rotated kernel. We use the 'same' flag to ensure that the output is the same size as the input.

However, when using conv2, we are assuming that the borders of the image are zero-padded. Your code simply copies the border pixels of the original image into the resulting image. This is known as replicating behaviour, but that is not what conv2 does natively. So what I would suggest you do is create two additional images: one being the output image that has 2 more rows and 2 more columns, and another being the input image that is the same size as the output image, with the original input image placed inside this matrix. Next, perform the filtering on this new image, place the resulting filtered pixels in the output image, then crop the result. I've decided to create a new padded input image in order to keep most of your code intact.
I would also recommend that you abolish the use of length here. Use size instead to determine the image dimensions. Something like this will work:
function [ image ] = Func134( img, matrix )
    [rows, cols] = size(img); %// Change

    %// New - Create a padded matrix that is the same class as the input
    new_img = zeros(rows+2, cols+2);
    new_img = cast(new_img, class(img));

    %// New - Place original image in padded result
    new_img(2:end-1, 2:end-1) = img;

    %// Also create new output image the same size as the padded result
    image = zeros(size(new_img));
    image = cast(image, class(img));

    for i = 2:1:rows+1 %// Change
        for j = 2:1:cols+1 %// Change
            value = 0;
            for g = -1:1:1
                for l = -1:1:1
                    value = value + new_img(i+g, j+l) * matrix(g+2, l+2); %// Change
                end
            end
            image(i,j) = value;
        end
    end

    %// Change - Crop the image and remove the extra border pixels
    image = image(2:end-1, 2:end-1);
end
To compare, I've generated this random matrix:
>> rng(123);
>> A = rand(10,10)
A =
0.6965 0.3432 0.6344 0.0921 0.6240 0.1206 0.6693 0.0957 0.3188 0.7050
0.2861 0.7290 0.8494 0.4337 0.1156 0.8263 0.5859 0.8853 0.6920 0.9954
0.2269 0.4386 0.7245 0.4309 0.3173 0.6031 0.6249 0.6272 0.5544 0.3559
0.5513 0.0597 0.6110 0.4937 0.4148 0.5451 0.6747 0.7234 0.3890 0.7625
0.7195 0.3980 0.7224 0.4258 0.8663 0.3428 0.8423 0.0161 0.9251 0.5932
0.4231 0.7380 0.3230 0.3123 0.2505 0.3041 0.0832 0.5944 0.8417 0.6917
0.9808 0.1825 0.3618 0.4264 0.4830 0.4170 0.7637 0.5568 0.3574 0.1511
0.6848 0.1755 0.2283 0.8934 0.9856 0.6813 0.2437 0.1590 0.0436 0.3989
0.4809 0.5316 0.2937 0.9442 0.5195 0.8755 0.1942 0.1531 0.3048 0.2409
0.3921 0.5318 0.6310 0.5018 0.6129 0.5104 0.5725 0.6955 0.3982 0.3435
Now running with what we talked about above:
mask = [3,10,3;0,0,0;-3,-10,-3];
mask_flip = mask(end:-1:1,end:-1:1);
B = Func134(A,mask);
C = conv2(A, mask_flip,'same');
We get the following for your function and the output of conv2:
>> B
B =
-5.0485 -10.6972 -11.9826 -7.2322 -4.9363 -10.3681 -10.9944 -12.6870 -12.5618 -12.0295
4.4100 0.1847 -2.2030 -2.7377 0.6031 -3.7711 -2.5978 -5.8890 -2.9036 2.7836
-0.6436 6.6134 4.2122 -0.7822 -2.3282 1.6488 0.4420 2.2619 4.2144 3.2372
-4.8046 -1.0665 0.1568 -1.5907 -4.6943 0.3036 0.4399 4.3466 -2.5859 -3.4849
-0.7529 -5.5344 1.3900 3.1715 2.9108 4.6771 7.0247 1.7062 -3.9277 -0.6497
-1.9663 2.4536 4.2516 2.2266 3.6084 0.6432 -1.0581 -3.4674 5.3815 6.1237
-0.9296 5.1244 0.8912 -7.7325 -10.2260 -6.4585 -1.4298 6.2675 10.1657 5.3225
3.9511 -1.7869 -1.9199 -5.0832 -3.2932 -2.9853 5.5304 5.9034 1.4683 -0.7394
1.8580 -3.8938 -3.9216 3.8254 5.4139 1.8404 -4.3850 -7.4159 -4.9894 -0.5096
6.4040 7.6395 7.3643 11.8812 10.6537 10.8957 5.0278 3.0277 4.2295 3.3229
>> C
C =
-5.0485 -10.6972 -11.9826 -7.2322 -4.9363 -10.3681 -10.9944 -12.6870 -12.5618 -12.0295
4.4100 0.1847 -2.2030 -2.7377 0.6031 -3.7711 -2.5978 -5.8890 -2.9036 2.7836
-0.6436 6.6134 4.2122 -0.7822 -2.3282 1.6488 0.4420 2.2619 4.2144 3.2372
-4.8046 -1.0665 0.1568 -1.5907 -4.6943 0.3036 0.4399 4.3466 -2.5859 -3.4849
-0.7529 -5.5344 1.3900 3.1715 2.9108 4.6771 7.0247 1.7062 -3.9277 -0.6497
-1.9663 2.4536 4.2516 2.2266 3.6084 0.6432 -1.0581 -3.4674 5.3815 6.1237
-0.9296 5.1244 0.8912 -7.7325 -10.2260 -6.4585 -1.4298 6.2675 10.1657 5.3225
3.9511 -1.7869 -1.9199 -5.0832 -3.2932 -2.9853 5.5304 5.9034 1.4683 -0.7394
1.8580 -3.8938 -3.9216 3.8254 5.4139 1.8404 -4.3850 -7.4159 -4.9894 -0.5096
6.4040 7.6395 7.3643 11.8812 10.6537 10.8957 5.0278 3.0277 4.2295 3.3229
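As an aside (my own addition, not part of the original answer): the correlation-versus-convolution relationship described above is easy to sanity-check in Python with SciPy, which provides both operations with zero-padded borders:

import numpy as np
from scipy.signal import convolve2d, correlate2d

# Correlating with a kernel equals convolving with the kernel rotated
# by 180 degrees, both zero-padded at the borders ('fill'), which is
# exactly the equivalence used in the MATLAB answer above.
A = np.random.rand(10, 10)
mask = np.array([[3.0, 10.0, 3.0],
                 [0.0, 0.0, 0.0],
                 [-3.0, -10.0, -3.0]])

corr = correlate2d(A, mask, mode='same', boundary='fill')
conv = convolve2d(A, mask[::-1, ::-1], mode='same', boundary='fill')
assert np.allclose(corr, conv)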

RGB to norm rgb transformation. Vectorizing

I'm writing a piece of code that has to transform an RGB image into a normalized rgb space. I've got it working with a for loop, but it runs too slowly and I need to evaluate lots of images, so I'm trying to vectorize the whole function to speed it up. What I have at the moment is the following:
R = im(:,:,1);
G = im(:,:,2);
B = im(:,:,3);
r=reshape(R,[],1);
g=reshape(G,[],1);
b=reshape(B,[],1);
clear R G B;
VNormalizedRed = r(:)/(r(:)+g(:)+b(:));
VNormalizedGreen = g(:)/(r(:)+g(:)+b(:));
VNormalizedBlue = b(:)/(r(:)+g(:)+b(:));
NormalizedRed = reshape(VNormalizedRed,height,width);
NormalizedGreen = reshape(VNormalizedGreen,height,width);
NormalizedBlue = reshape(VNormalizedBlue,height,width);
The main problem is that when it arrives at VNormalizedRed = r(:)/(r(:)+g(:)+b(:)); it displays an out-of-memory error (which is really strange, because I just freed three vectors of the same size). Where is the error? (solved)
Is it possible to do the same process in a more efficient way?
Edit:
After using Martin's suggestions, I found the reshape function was not necessary and I was able to do the same with simpler code:
R = im(:,:,1);
G = im(:,:,2);
B = im(:,:,3);
NormalizedRed = R(:,:)./sqrt(R(:,:).^2+G(:,:).^2+B(:,:).^2);
NormalizedGreen = G(:,:)./sqrt(R(:,:).^2+G(:,:).^2+B(:,:).^2);
NormalizedBlue = B(:,:)./sqrt(R(:,:).^2+G(:,:).^2+B(:,:).^2);
norm(:,:,1) = NormalizedRed(:,:);
norm(:,:,2) = NormalizedGreen(:,:);
norm(:,:,3) = NormalizedBlue(:,:);
I believe you want
VNormalizedRed = r(:)./(r(:)+g(:)+b(:));
Note the dot in front of the /, which specifies an element-by-element divide. Without the dot, you're solving a system of equations -- which is likely not what you want to do. This probably also explains why you're seeing the high memory consumption.
Your entire first code can be rewritten in one vectorized line:
im_normalized = bsxfun(@rdivide, im, sum(im, 3, 'native'));
Your second, slightly modified version can be rewritten as:
im_normalized = bsxfun(@rdivide, im, sqrt(sum(im.^2, 3, 'native')));
BTW, you should be aware of the data type used for the image, otherwise one can get unexpected results (due to integer division for example). Therefore I would convert the image to double before performing the normalization calculations:
im = im2double(im);
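For comparison (my own addition, not part of the original answers): the same broadcasting trick can be written in Python with NumPy, where keepdims plays the role that bsxfun plays in MATLAB:

import numpy as np

# Illustrative sketch on a random image; `im` stands in for the
# question's image. Convert to float first to avoid integer division,
# analogous to im2double in MATLAB.
im = np.random.rand(4, 4, 3)

# Sum normalization (the first version in the question):
norm_sum = im / im.sum(axis=2, keepdims=True)

# L2 normalization (the asker's edited version):
norm_l2 = im / np.sqrt((im ** 2).sum(axis=2, keepdims=True))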
