SSRS - right align dynamic number of images

So I have an SSRS report that contains only one image control, which gets its image data from a VARBINARY(MAX) column in the database.
I don't know how many images the dataset query will fetch.
Currently, those images are shown one below another, losing precious whitespace to the right.
What I would like to do is fit as many images as possible side by side, and wrap to a new line when the next image is wider than the remaining whitespace on the right.
I've spent the whole morning searching for this, but to no avail.

To answer my own question, the solution is simple: create as many copies of the image object as could possibly fit horizontally. Then, for each image object, use this expression in the Visibility pane:
=IIF((RowNumber(Nothing) Mod 4) = 1, False, True)
where 4 is the number of image objects you have created. In each subsequent image object, increase the remainder you compare against by one (so the second image object uses Mod 4 = 2, the third Mod 4 = 3, and so on). However, in the last image object, use:
=IIF((RowNumber(Nothing) Mod 4) = 0, False, True)
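Spelled out for four image copies, the Hidden expressions simply restate the pattern above for each copy:
Image 1: =IIF((RowNumber(Nothing) Mod 4) = 1, False, True)
Image 2: =IIF((RowNumber(Nothing) Mod 4) = 2, False, True)
Image 3: =IIF((RowNumber(Nothing) Mod 4) = 3, False, True)
Image 4: =IIF((RowNumber(Nothing) Mod 4) = 0, False, True)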

Related

Adding Images Efficiently to a Google Spreadsheet

I am exploring using a GAS script to build a human-readable product catalogue as a Google Spreadsheet, since it's easy to generate a PDF or print from there. The product data is all quickly accessible via an API, including image URLs for each product.
I'm running into issues because inserting an image that references a URL and then resizing it takes 3-4 seconds in my prototype, and I might have around 150 products. Runtime is capped at 6 minutes. Here's a simplified example of the image-processing loop I'm imagining:
function insertImages(sheet, array_of_urls) {
  for (var i = 0; i < array_of_urls.length; i++) {
    // Insert each image over the cells, then resize it to 90 x 90.
    let image = sheet.insertImage(array_of_urls[i], 1, (i + 1) * 3);
    image.setWidth(90);
    image.setHeight(90);
  }
}
I think it takes so long because of the interaction with the UI. Can anyone recommend a way I could make the script more efficient?
Insert images over cells:
If you want the images over cells (that is, not contained in a specific cell), I don't think there's a way to make this significantly faster. There's no method to insert multiple images at once.
You could at most try to retrieve the image blobs, resize the images through some third party before inserting them, and finally insert them via insertImage(blobSource, column, row).
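As a rough sketch of that blob-based approach (the pre-resizing step is left as a placeholder; substitute whatever third-party service or library you choose):
function insertImageBlobs(sheet, array_of_urls) {
  // Hypothetical helper, not from the original post: fetch each image as a
  // blob (ideally already resized to 90 x 90 by your resizing service) and
  // insert it over the cells.
  for (var i = 0; i < array_of_urls.length; i++) {
    var blob = UrlFetchApp.fetch(array_of_urls[i]).getBlob();
    sheet.insertImage(blob, 1, (i + 1) * 3);
  }
}
Note that this still makes one insertImage call per image, so the main saving is skipping setWidth/setHeight on blobs that are already the right size.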
In any case, there are ways to get through the 6 minute execution time limit. See, for example, this answer.
Insert image in cells:
If you don't mind having the images inside specific cells rather than over them, I'd suggest adding them via the IMAGE formula, using setFormulas.
The image size can be set through the IMAGE formula in the following way:
=IMAGE("URL", 4, [height in pixels], [width in pixels])
Also, to make sure the cells' height is large enough for the images to be seen, you can use setRowHeights.
Code snippet:
function insertImages(sheet, array_of_urls) {
  const formulas = array_of_urls.map(url => ["=IMAGE(\"" + url + "\", 4, 90, 90)"]);
  const firstRow = 1;
  sheet.getRange(firstRow, 1, formulas.length, formulas[0].length).setFormulas(formulas);
  sheet.setRowHeights(firstRow, formulas.length, 90);
}

Use of torchvision.utils.save_image twice on the same image tensor makes the second save not work. What's going on?

(Fast Gradient Sign Attack method detailed here: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html)
I have a trained classifier with >90% accuracy which I am using to create these adversarial examples, and I am using torchvision.utils.save_image to save the images to different folders.
The folder hierarchy is as follows:
FOLDER_1
    original_image.jpg (1)
    perturbed_image.jpg (2)
FOLDER_2
    perturbed_image.jpg (3)
Here (2) and (3) are the same image tensor, which is the sum of the original image tensor and a perturbation image tensor; I just want images that fooled the classifier to be saved twice. What I'm finding is that (1) and (2) come out fine, but (3) contains only the perturbation image tensor (it is as if the original image tensor was subtracted!). So when I open (2) I see my original picture with all the noise on top (random RGB pixel changes from the FGSM attack), but when I open (3) I see a blank canvas with ONLY those random RGB pixel changes.
Since I am saving the same variable (perturbed_data) twice, I don't understand why torchvision.utils.save_image appears to subtract the original image tensor the second time I call it. The code for what I'm describing is below, and data is the original image tensor.
epsilon = 0.5
# Collect datagrad
data_grad = data.grad.data
# Call FGSM Attack
perturbed_data = fgsm_attack(data, epsilon, data_grad)
# Re-classify the perturbed image
perturbed_output = model(perturbed_data)
perturbed_output = torch.sigmoid(perturbed_output)
perturbed_output = perturbed_output.max(1, keepdim=True)[1]
max_pred = perturbed_output.item()
final_pred = torch.tensor([0, 0]).to(device)
final_pred[max_pred] = 1
# Store all original and perturbed images, regardless of model prediction
torchvision.utils.save_image(data, "./FOLDER_1/original.jpg")
torchvision.utils.save_image(perturbed_data, "./FOLDER_1/perturbed_image.jpg")
# If the perturbed image fools our classifier, put a copy of it in FOLDER_2
if not torch.all(torch.eq(final_pred, target)):
    torchvision.utils.save_image(perturbed_data, "./FOLDER_2/perturbed_image.jpg")
I'm almost sure that this is a torchvision bug, but I thought I would ask here before submitting a bug report. Maybe someone sees something I don't. I've also attached an example of (2) and (3) for visualization. The first image is in the correct format, but the second one prints without the original image tensor.
It turns out torchvision.utils.save_image modifies its input tensor. A workaround is to make an actual copy of the tensor somewhere before the first call:
perturbed_data_copy = perturbed_data.clone()
(A plain assignment such as perturbed_data_copy = perturbed_data only creates another reference to the same tensor, so .clone() is needed.) Then you can safely save the perturbed image twice if, on the second call, you use perturbed_data_copy instead of perturbed_data (which was modified by torchvision.utils.save_image). I will be submitting a bug report and tagging this post. Thanks @Mat for pointing this out!
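In context, a minimal sketch of that workaround (using the same variable names as the snippet above) would look like this:
perturbed_data_copy = perturbed_data.clone()  # untouched copy for the second save
torchvision.utils.save_image(data, "./FOLDER_1/original.jpg")
torchvision.utils.save_image(perturbed_data, "./FOLDER_1/perturbed_image.jpg")
# If the perturbed image fools our classifier, save the untouched copy
if not torch.all(torch.eq(final_pred, target)):
    torchvision.utils.save_image(perturbed_data_copy, "./FOLDER_2/perturbed_image.jpg")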
I had this issue too. I had two image tensors (image1 + image2) added together to form a new tensor (image3), but when I saved the new tensor (image3) a second time, only the second tensor (image2) in the sum was saved.
PyTorch had a PR to fix this sometime late last year.

Display .bin depth image in Matlab

Here is the .bin image that I saved in C#. Can anyone help me view it in MATLAB? The image dimensions are 424 x 512.
I have tried this code, but it's not working correctly:
file = fopen('test0.bin', 'r');
A = fread(file, 424*512, 'uint16=>uint16');
A1 = reshape(A, 424, 512);
imagesc(A1)
Before downvoting, please tell me the reason so I can update this.
There are row-major and column-major programming languages. To simplify: which is the second element in memory, the first column of the second row or the second column of the first row? There is no "right" answer, so some programming languages use one convention and some the other. This is the main problem in your case: if you get it wrong, the image looks like the one you got.
To fix the problem with row and column major, you have to use:
A1 = reshape(A, 512, 424).';
This reshapes with the dimensions swapped to account for the file's row-major layout, then transposes so the image is oriented correctly.
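Putting it together, a minimal corrected read (assuming the file holds exactly 424 x 512 uint16 values and nothing else) would be:
fid = fopen('test0.bin', 'r');
A = fread(fid, 424*512, 'uint16=>uint16');
fclose(fid);
A1 = reshape(A, 512, 424).';   % fill the 512 row-major columns first, then transpose
imagesc(A1); axis image; colormap(gray);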

Reading an SVM training dataset

I want to read a training image set for SVM training. This is the code:
%Location of the image.
Class1 = 'Training/11';
% load the dataset
dirList = dir(fullfile(Class1,'*.ppm'));
%dirList
files={dirList.name}';
The files variable I got is a cell array. How can I access these images to do something with them, like displaying them and performing feature extraction?
When I tried to display them:
figure, imshow(files)
I got this error:
Error using imageDisplayValidateParams
Expected input number 1, I, to be one of these types:
double, single, uint8, uint16, uint32, uint64, int8, int16, int32, int64,
logical
Instead its type was cell.
Error in imageDisplayValidateParams (line 12)
validateattributes(common_args.CData, {'numeric','logical'},...
Error in imageDisplayParseInputs (line 79)
common_args = imageDisplayValidateParams(common_args);
Error in imshow (line 220)
[common_args,specific_args] = ...
Do you know how to access and process the images listed in files?
(Screenshots of the folder directory and the contents of the Training folder were attached.)
First off, imshow requires an actual image as its input, but you are giving it a cell array of strings. On top of that, you can only show one image at a time. Try accessing individual cell elements instead and use those to read in and display each image.
im1 = imread(fullfile(Class1, files{1})); % Read in the first image from the Training/11 folder
imshow(im1); % Show the first image
figure;
im2 = imread(fullfile(Class1, files{2})); % Read in the second image
imshow(im2); % Show the second image
If you want to display all of them, you could try using a combination of imshow and subplot.
Let's say you had 9 images, and wanted to organize them in a 3 x 3 grid. You could do something like:
figure;
for i = 1 : 9
    subplot(3, 3, i);
    im = imread(fullfile(Class1, files{i}));
    imshow(im);
end
Now, for performing feature extraction, my suggestion is that you take a look at the Computer Vision Toolbox that ships with MATLAB. It has a whole suite of tools that perform feature extraction for you: things like MSER, SURF, HOG, and methods to match keypoints between pairs of images.
Check this link out: http://www.mathworks.com/products/computer-vision/code-examples.html
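For example, a rough SURF-based sketch (this assumes the Computer Vision Toolbox is installed; Class1 and files come from the question's code):
for k = 1 : numel(files)
    I = imread(fullfile(Class1, files{k}));
    if size(I, 3) == 3
        I = rgb2gray(I);   % SURF detection works on grayscale images
    end
    points = detectSURFFeatures(I);
    [featureVectors, validPoints] = extractFeatures(I, points);
    % ... collect featureVectors as training data for the SVM ...
end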

Manipulating subsections of an array

I am using R and trying to conditionally change parts of an array based on the columns of the array.
I have worked out the following steps:
x<-array(1,dim=c(4,4,3))
r<-x[,,1]
g<-x[,,2]
b<-x[,,3]
r1<-apply(r[,2:3],1:2,function(f){return(0)})
g1<-apply(g[,2:3],1:2,function(f){return(0)})
b1<-apply(b[,2:3],1:2,function(f){return(0)})
r3<-cbind(r[,1],r1,r[,4])
g3<-cbind(g[,1],g1,g[,4])
b3<-cbind(b[,1],b1,b[,4])
# Pass to pixmapRGB
This works, but as I am new to R, I was wondering if there is a more efficient way to manipulate parts of an array. For example, does apply know which element it is working on?
The bigger picture is that I want to graph a time-series scatter plot over many pages. I would like to have a thumbnail in the corner of each page that is a graph of the whole series, and to color a portion of that thumbnail a different color to indicate what range the current page is examining.
There is a lot of data, so it is not feasible to redraw a new plot for the thumbnail on every page. What I have done is to first write the thumbnail plot out to a tiff file, then read the tiff file back in, use getChannels from pixmap to break the picture into arrays, and use the above code to change some of the pixels based on column. Finally, I print the image to a viewport using pixmapRGB/pixmapGrob/grid.draw.
It seems like a lot of steps. I would be grateful for any pointers that would help me make this more efficient.
Maybe I don't understand your question, but if what you're trying to do is just "change some pixels based on column," why don't you just use the basic array indexing to do that?
This will do the same thing you have posted:
x<-array(1,dim=c(4,4,3))
r<-x[,,1]
g<-x[,,2]
b<-x[,,3]
r[, 2:3] <- 0
g[, 2:3] <- 0
b[, 2:3] <- 0
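A small extra sketch, not from the original answer: if you don't actually need the separate r, g and b matrices, the same zeroing can be done directly on the 3-D array in one step.
x <- array(1, dim = c(4, 4, 3))
x[, 2:3, ] <- 0   # zero columns 2-3 across all three channels at once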
Is that helpful?
Perhaps more of a comment than an answer, but when I try to plot over a number of pages I usually go left to right, breaking the plots up into quantiles and setting an appropriate xlim (or ylim):
x <- rnorm(100000)
y <- rnorm(100000)
df <- data.frame(x, y)
seq1 <- quantile(df$x, probs = seq(0, 1, 0.1))
seq2 <- quantile(df$y, probs = seq(0, 1, 0.1))  # the same idea, for ylim
for(i in 1:(length(seq1) - 1)) {
    plot(df, xlim = c(seq1[i], seq1[i + 1]))
}
No idea how to overlay a thumbnail onto the graphs although I think you could do this with one of the rimage functions if you saved the thumbnail.
You could avoid having to read and paste a tiff thumbnail by actually replotting the whole chart at a reduced scale. Check out par(fig), and then do something like:
Rgames: plot(1:2,1:2)
Rgames: par(mar=c(.1,6,.1,.1),new=T,fig=c(0,.25,.5,.75))
Rgames: plot(1:2,1:2)
Rgames: polygon(c(1,2,2,1),c(1,1,2,2),col='red')
("Rgames:" is my prompt)
You'll have to play a bit with the margin values, but this will get your "mini-graph" set up.
