Empty PowerPoint chart size is massive - Why?

I created a macro that updates around 55 PowerPoint slides, ranging from populating tables to updating line and bar charts. The macro works well; however, for some reason the PowerPoint file size has increased significantly. While I was working on the macro the size was around 80,000 KB; after a few minor changes it suddenly nearly doubled to 150,000 KB. To find out which slides cause this huge size, I published the slides individually to compare their sizes and was able to narrow down the problem. Due to the large variety of charts I will focus on one kind.
I have 2 regular line charts on one slide and the size is 5000+ KB! Whenever I delete one of the two, the size drops to roughly half.
I have taken the following steps to try to find the problem:
1) Removed and deleted all cells that the chart references (inside the PowerPoint) -- no change in file size.
2) Removed all chart features, such as axis titles, legends, etc. -- no change in file size.
3) The slide is not macro-enabled and therefore has no macro included in the file.
4) Made sure there are no hidden objects.
All that is left is an empty 'Chart Placeholder' with no data in the embedded Excel file, and yet the size is very large.
The PowerPoint slide contains no images either. A regular PowerPoint slide with a line chart should only be around 50-100 KB, and I do not understand how this chart ends up so massive.
First time posting my question here! Hopefully someone can help out.
Thanks!
UPDATE:
I finally was able to find the problem. For some reason, every chart's embedded workbook had its used range extended to the maximum number of rows (1+ million rows), making the file size that large!
I added wb.worksheets(1).UsedRange to the end of each procedure and now the entire file size is around 4,000 KB!
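In case it helps anyone automating the same clean-up from outside PowerPoint, here is a rough Python/pywin32 sketch of the same idea (the file path and loop structure are illustrative; the actual fix in my macro is just the VBA line above added at the end of each procedure):

    # Sketch: touch the UsedRange of every chart's embedded workbook so Excel
    # recalculates it, then save the presentation. Assumes pywin32 and a local
    # PowerPoint installation; "deck.pptx" is a placeholder file name.
    import win32com.client

    app = win32com.client.Dispatch("PowerPoint.Application")
    pres = app.Presentations.Open(r"C:\reports\deck.pptx", False, False, False)
    try:
        for slide in pres.Slides:
            for shape in slide.Shapes:
                if shape.HasChart:
                    shape.Chart.ChartData.Activate()      # open the chart's workbook
                    wb = shape.Chart.ChartData.Workbook
                    wb.Worksheets(1).UsedRange            # same trick as above
        pres.Save()
    finally:
        pres.Close()
        app.Quit()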
Thank you.

Related

How to increase number of displayed rows in Oracle Reports 6i?

I have a tabular report in Oracle Reports 6i, and when I print it, it produces a large margin at the bottom and continues printing the remaining rows/records on the next page. The orientation of the printout is portrait. I increased the maximum number of records per page, but the printout still won't go beyond the number of rows it was printing before, as if the change had no effect.
In the Paper Layout editor, check the margin layout: the thick black line shows the "usable" part of the report - check whether it is stretched across the whole paper size.
Also, check whether the paper size is set correctly. For example, for portrait A4 paper, it should be 21 cm wide and 29.7 cm high.
Furthermore, the data is contained within a frame; actually, two of them: the repeating frame and the one that encloses it (its default name begins with an M). Check their vertical elasticity properties: they should be either Variable or Expand (maybe one or both of them are currently set to Fixed).

Better way to display a radar ppi?

I am using Qt 4.8.6 to display multiple radar videos.
For now I am getting about 4096 azimuths (a full 360°) every 2.5 seconds per video.
I display my image using a class inherited from QGraphicsObject (see here), using one of the RGB channels for each video.
Per azimuth I get the angle and an array of 8192 rangebins, and my image is 1024x1024 pixels. For every pixel (I go through every x-coordinate and check the max and min y-coordinate for every azimuth and pixel coordinate) I determine which rangebins are present at that pixel and write the largest value into my image array.
My problems
Calculating each azimuth takes about 1 ms, which is far too slow. (I receive two azimuths roughly every 600 microseconds, and later there may be even more video channels.)
I want to zoom and move my image and have so far thought about two methods to do that:
1) Use a full-size image array and zoom and move the QGraphicsScene directly/"virtually".
That would make the array 16384x16384x4 bytes, which is far too big (I cannot manage to allocate enough space).
2) Save multiple images for different scale factors and offsets, but for that my transformation algorithm (which is already slow) would have to run multiple times, and the new zoom and offset would only show up after the full 2.5 seconds.
Can you think of any better methods to do that?
Are there any standard rules for how I can check my algorithm for better performance?
I know this is a very specific question, but since my mentor is out of the office for the next few days, I will give it a try here.
Thank you!
I'm not sure why you are using a QGraphicsScene for this scenario. Have you considered turning your data into a raster image and presenting it as a bitmap?
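For the scan-conversion part, a precomputed lookup table is one common way to get from (azimuth, rangebin) data to a bitmap quickly. A minimal NumPy sketch of the idea (Python rather than Qt/C++, and it samples the nearest rangebin per pixel instead of taking the maximum over all bins covering a pixel, as described in the question):

    import numpy as np

    H = W = 1024                 # output image size in pixels
    N_AZ, N_RNG = 4096, 8192     # azimuths per scan, rangebins per azimuth

    # Precompute once: for every output pixel, which azimuth/rangebin covers it.
    y, x = np.mgrid[0:H, 0:W]
    cx, cy = W / 2.0, H / 2.0
    dx, dy = x - cx, cy - y
    radius = np.sqrt(dx * dx + dy * dy)
    angle = np.mod(np.arctan2(dx, dy), 2 * np.pi)        # 0 = "north", clockwise

    az_idx = np.minimum((angle / (2 * np.pi) * N_AZ).astype(np.int32), N_AZ - 1)
    rng_idx = np.minimum((radius / radius.max() * (N_RNG - 1)).astype(np.int32),
                         N_RNG - 1)

    def scan_to_image(scan):
        """scan: (N_AZ, N_RNG) array of amplitudes -> (H, W) image array."""
        return scan[az_idx, rng_idx]

The per-frame work is then a single indexing pass instead of a per-pixel search, and the resulting array can be copied into a QImage (one channel per video) and shown as a pixmap.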

XSLT create complex SVG visualisation minimizing line crossings etc

This is not really a single coding problem, but rather a question about the right approach to a complex issue.
I have built a rather complex SVG visualisation of my XML data using XSLT. It looks like this:
[image of the visualisation (source: erksst.de)]
This is just a small sample of the whole data. There are two or three rows. Each row could contain up to 160 yellow boxes.
(The yellow boxes are letter collections, the blue/grey boxes single letters, the lines represent their way of dissemination.)
It works well so far but I want to optimize it:
(1) minimize the number of line crossings
(2) minimize the number of lines crossing a blue/grey box
(3) minimize the number of lines running too close to another line.
To achieve this there are things to vary:
(a) The broadest row (in the sample it is the third) is fixed; it can't be moved. But the other (two) rows can be moved within the range of the width of the broadest row, i.e. in my example the yellow box of the second row could be moved some 160 pixels to the right.
(b) Furthermore, in the two smaller rows the margins between the yellow boxes can be varied. In my example there is just one box per row, but of course there could be more than one yellow box in the two smaller rows.
(c) The order of the yellow boxes within a row could be altered.
So there are many possibilities for realizing this visualisation.
The problem is the processing time.
I have started with the line-crossing problem, using a function that pre-builds the visualisation and counts the number of crossings.
The variant with the smallest number of crossings is then actually built in the output.
The problem is the time this takes. The transformation with just 100 possibilities and my whole XML data took 90 seconds. That doesn't sound like much, but considering that 100 variations are only a very small part of all theoretically possible options, and that the visualisation should at some point in the future be built on the fly on a server for a user's selection of the data, 90 seconds is simply way too much.
I have already reduced the visualisation template used by the crossing-counting function to what is strictly necessary, leaving aside all captions and so on. That did help, but not as much as expected.
The lines are drawn as follows: first, all boxes are drawn, keeping their ids from the original data. Then I go back to my data, look up where the connections are, and build the lines.
You could transform your XML into the DOT language (a plain-text format) with XSLT and process it with GraphViz. I solved a similar issue (although not as huge as yours seems to be) this way.
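To make the DOT route concrete, this is the rough shape of the transform, sketched here in Python rather than XSLT (the element and attribute names collection, letter, connection, id, from, to are made up; the real version would be an XSLT template emitting the same text):

    import xml.etree.ElementTree as ET

    tree = ET.parse("letters.xml")        # placeholder file name
    root = tree.getroot()

    lines = ["digraph dissemination {", "  node [shape=box];"]
    for box in root.iter("collection"):   # yellow boxes
        lines.append('  "%s" [style=filled, fillcolor=yellow];' % box.get("id"))
    for letter in root.iter("letter"):    # blue/grey boxes
        lines.append('  "%s";' % letter.get("id"))
    for conn in root.iter("connection"):  # dissemination lines
        lines.append('  "%s" -> "%s";' % (conn.get("from"), conn.get("to")))
    lines.append("}")

    with open("dissemination.dot", "w") as f:
        f.write("\n".join(lines))
    # Render with GraphViz, which does the crossing minimization itself:
    #   dot -Tsvg dissemination.dot -o dissemination.svg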

How to most efficiently fill a printed page with 1 main image and scaled padding images?

I am printing panoramas with aspect ratios between 1.7:1 and 8:1. A common case is an image which scales to 18" x 8" leaving 4 inches on a standard 12x18 inch page. Paper only comes in so many sizes, but my images can be almost any size.
I hate to waste a 4x18 inch print space and I have been eyeballing other stock images which can be scaled to fill up most of the room with minimal whitespace or distortion. I just put my website and copyright on them and use them as business cards.
Every time I do a print job, I have to find new, creative combinations. It seems as though somebody has already skinned this cat. I have searched for a few hours and can't find any reference to the algorithm.
I have written a tiny, half-fast (?) script which takes an image file spec and a page size, scales it to fit the page, calculates the whitespace and gives me the aspect ratio of the whitespace. I then look for some stock image with roughly that AR and scale it to fit.
If no single image fits, then I look for 2 which will fit. I have gone as far as to use 3. What a recurring pain.
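For reference, the whitespace part of the calculation is roughly this (a Python sketch, not the actual script; dimensions are in inches and the function name is made up):

    def leftover_whitespace(img_w, img_h, page_w, page_h):
        """Scale the image to fit the page and return the leftover strip's
        width, height and aspect ratio (all dimensions in the same unit)."""
        scale = min(page_w / img_w, page_h / img_h)
        fit_w, fit_h = img_w * scale, img_h * scale
        strip_w, strip_h = page_w - fit_w, page_h - fit_h
        # The leftover area is a single strip along one edge of the page.
        if strip_w > strip_h:
            w, h = strip_w, page_h       # vertical strip beside the image
        else:
            w, h = page_w, strip_h       # horizontal strip below the image
        if min(w, h) == 0:
            return w, h, None            # image fills the page exactly
        return w, h, max(w, h) / min(w, h)

    # The 18x8 panorama on a 12x18 inch page from above:
    print(leftover_whitespace(18, 8, 18, 12))   # -> (18, 4.0, 4.5), i.e. a 4x18 strip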
Why would you spend hours trying to save a couple of dollars' worth of paper? Some of the prints are fairly high volume and some are printed at 3+ feet, so it can add up. Plus, I get free gimmes I can hand out to put real product in people's hands.
And some of the panoramas take up to 34 HDR images, each of which has to be hand-Photoshopped to match, and each HDR image is usually built from 7 separate 36 MPix D800E exposures. Yes, it takes as many as 238 individual NEF files and trillions of CPU cycles to make one gigantic panorama.
I already have many hours invested in each picture, and once I make a packed master, I can use it from then on, scaling to fit. Converting ~30% waste into useful product adds directly to the bottom line.
Once the grunt calculations and combinations are done, it would be quite easy to use ImageMagick to scale them on the fly and flash them in front of you so you could pick the one you like best. It could be entertaining.
Most importantly, it is an intellectual challenge, a good solution of which has so far eluded me.
Any ideas?

Optimize the SVG output from Gnuplot

I've been trying to plot a dataset containing about 500,000 values using gnuplot. Although the plotting went well, the SVG file it produced is too large (about 25 MB) and takes ages to render. Is there some way I can reduce the file size?
I have a vague understanding of the SVG file format, and I realize this is because SVG is a vector format and thus has to store the 500,000 points individually.
I also tried Scour and re-printing the SVG without any success.
The time it takes to render your SVG file is proportional to the amount of information in it. Thus, the only way to speed up rendering is to reduce the amount of data.
I think it is a little tedious to fiddle with an already generated SVG file, so I would suggest reducing the amount of data that gnuplot has to plot.
Maybe gnuplot's every option (which plots only every Nth point) or some other data reduction can help, like splitting the data into multiple plots...
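If fiddling with gnuplot options is not enough, the thinning can also be done on the data file itself before plotting. A minimal Python sketch, assuming a plain-text data file with one sample per line (the file names and the factor N are illustrative):

    # Keep every Nth data row (and comment lines) so gnuplot, and therefore the
    # generated SVG, only sees a fraction of the 500,000 points.
    N = 50   # thinning factor

    with open("data.dat") as src, open("data_thin.dat", "w") as dst:
        for i, line in enumerate(src):
            if line.startswith("#") or i % N == 0:
                dst.write(line)

Then plot data_thin.dat instead; at typical screen or print sizes the curve will usually look the same.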
I would recommend keeping it in a vector graphics format and then choosing a resolution for the document that you put it into later.
The main reason for doing this is that you might one day use that image in a poster (for example) and print it at hundreds of times the current resolution.
I normally convert my final PDF into DjVu format.
pdf2djvu --dpi=600 -o my_file_600.djvu my_file.pdf
This lets me specify the resolution of the document as a whole (including the text), rather than different resolutions scattered throughout.
On the downside it does mean having a large PDF for the original document. However, this can be mitigated if you are using LaTeX to make your original PDF - you can use the draft option until you have finished, so that images are not imported during your day-to-day editing of the text (where rendering large images would be annoying).
Did you try printing to PDF and then converting to SVG?
On Linux, you can do that with ImageMagick, which you may even be able to use to reduce the size of your original SVG file.
Or there are online converters, such as http://image.online-convert.com/convert-to-svg
