Is it possible to create a collapsible image (similar to reddit) inside a PDF document?
The book is around 500 pages and has many large images. Hyperlinking wouldn't work very well because it would lead the reader to another page, and going back to find where you stopped can be annoying.
If this isn't possible, I'm open to other suggestions. The goal is streamlined reading and viewing images.
I'm trying to create a tool to rasterise vector images—stored in PDF files—on macOS, but the resulting images contain artifacts around the edges of some shapes. Preview.app, on the other hand, always renders the PDF flawlessly, as shown in this example:
I've tried:
Loading the PDF document using PDFKit, and rendering the page using both draw(with:to:) and thumbnail(of:for:)
Loading the PDF document into an NSImage (which creates an NSPDFImageRep), and using cgImage(forProposedRect:context:hints:)
In both cases I get these aliasing-like artifacts, as seen on the left-hand side of the image above. The PDF file is out of my control, so it can't be changed to fix any issues it might have. I'm currently trying to migrate away from Cairo (which renders correctly) to Apple's PDF rendering for performance reasons (PDFKit renders it much more quickly, albeit with these artifacts).
Is there anything I've missed which would fix the output?
So it looks like the issue was caused by my rasterising PDFs on multiple threads (specifically, my tool rasterises each PDF at multiple resolutions, so I thought: why not do them simultaneously?).
Performing the operations sequentially on the main thread instead fixed it. I thought I had come up with a way to do it concurrently by initialising the CGContext manually (instead of using NSImage's lockFocus()/unlockFocus() and NSGraphicsContext.current), but alas, as soon as I add a context.scaleBy call (to generate the images at different sizes), it fails again.
So for now I'm just doing it on the main thread until another solution comes along.
I have (or, rather, will soon have) a number of maps created in ArcGIS 10.0 and exported as PDF documents. The maps all show contiguous areas, being rather like the pages in a map book. There will also be a smaller-scale map depicting the entire area (let's call it the "study area"), but with less detail, rather like that page of a map atlas that shows what page depicts what area.
I wonder if there is any way to create thumbnails of the larger-scale maps and mosaic them so as to create an index map of the study area. A user would then be able to see, for a particular point on the smaller-scale map, which of the larger-scale maps depicts that part of the study area. (And perhaps see that map by clicking on the larger map?) Does anyone have any ideas on how I can implement this? I would prefer exporting the maps in PDF format, but, if I can't do all of the above with PDF, then any other format to which a map can be exported from ArcGIS, such as JPG or TIF, will work.
You should be able to create a PDF which does this.
What you need to do is render each page to a small image.
Then collect each of these images and add them as a mosaic to an index page.
Then put links from each small image back to the original PDF page.
If the hierarchy was more than one level deep you could repeat the process.
You need a PDF component to do this. What you want in terms of features is something which does decent PDF rendering. It's an easy thing to do badly and a difficult thing to do well.
ABCpdf .NET does good quality rendering so it's what I would suggest, but then I would because I work on it. :-)
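If you end up on a different stack, the same recipe can be sketched with an open-source library. Below is a rough TypeScript sketch using the pdf-lib package (a swapped-in tool, not ABCpdf); the file names, page size, and grid layout are placeholders, and instead of rasterising each page it draws a scaled-down vector copy of the page on the index page, with a low-level link annotation that jumps to the full-size page. Treat the annotation plumbing as a starting point rather than a finished implementation.

```ts
import { promises as fs } from 'fs';
import { PDFDocument, PDFName, PDFRef } from 'pdf-lib';

// Placeholder inputs: one single-page PDF per detail map.
const MAP_FILES = ['map-01.pdf', 'map-02.pdf', 'map-03.pdf'];

async function buildIndexedAtlas(mapFiles: string[], outputPath: string) {
  const atlas = await PDFDocument.create();
  const indexPage = atlas.addPage([612, 792]);            // Letter-size index page
  const cols = 3, thumbW = 170, thumbH = 200, margin = 20; // naive grid; tune for your map count
  const linkRefs: PDFRef[] = [];

  for (let i = 0; i < mapFiles.length; i++) {
    const bytes = await fs.readFile(mapFiles[i]);

    // Copy the full-size map page into the combined document.
    const src = await PDFDocument.load(bytes);
    const [copied] = await atlas.copyPages(src, [0]);
    const fullPage = atlas.addPage(copied);

    // Embed the same page again and draw it scaled down on the index page
    // (a vector "thumbnail" rather than a rasterised one).
    const [embedded] = await atlas.embedPdf(bytes, [0]);
    const scale = Math.min(thumbW / embedded.width, thumbH / embedded.height);
    const w = embedded.width * scale, h = embedded.height * scale;
    const x = margin + (i % cols) * (thumbW + margin);
    const y = 792 - margin - thumbH - Math.floor(i / cols) * (thumbH + margin);
    indexPage.drawPage(embedded, { x, y, xScale: scale, yScale: scale });

    // Low-level link annotation: clicking the thumbnail jumps to the full page.
    linkRefs.push(
      atlas.context.register(
        atlas.context.obj({
          Type: 'Annot',
          Subtype: 'Link',
          Rect: [x, y, x + w, y + h],
          Border: [0, 0, 0],
          A: { Type: 'Action', S: 'GoTo', D: [fullPage.ref, 'Fit'] },
        }),
      ),
    );
  }

  indexPage.node.set(PDFName.of('Annots'), atlas.context.obj(linkRefs));
  await fs.writeFile(outputPath, await atlas.save());
}

buildIndexedAtlas(MAP_FILES, 'atlas-with-index.pdf').catch(console.error);
```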
I have a question.
I am currently developing a website and I am nearly finished with the gallery section. When a user clicks on one of the gallery pictures, the gallery will display either a) a PDF file or b) several PNGs (displayed one under another, so it looks just like the PDF).
As said, I have the option of choosing between PDF and PNG.
The problem is that the PDF files are up to 5 MB each and I cannot do anything about this. I was thinking of preloading the PDF files or PNGs in advance.
I am looking for one thing: site performance.
Is it better to preload the PDFs (there will be up to 20-30 PDFs in this gallery, so I would be preloading 20-30 of them), or is it better to go with the PNG approach (20-30 × x pages)?
Or does anyone have a better idea?
The best approach is to try it, measure, and see for yourself.
In any case, you can reduce the load by finding a balance between the amount of preloaded data and the complexity it adds.
In the case of PNGs, you might try splitting the images into smaller parts and preloading only the images required to display the first page or two. This allows a visitor to open those pages quickly (they might not go any further, after all).
In the case of PDFs, you might linearize the documents (optimize them for Fast Web View). For linearized documents you can try preloading only the beginning of the file; the open question is how much of it you should preload. The Adobe plugin opens linearized files faster than non-linearized ones, but I am not aware of other PDF plugins that make use of this optimization.
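To make the two suggestions concrete, here is a minimal browser-side TypeScript sketch. The URLs are placeholders, the 256 KB figure is a guess you would have to tune, and it assumes your server honours HTTP Range requests; you would also have to verify that your PDF viewer actually benefits from the partially cached file.

```ts
// PNG approach: preload only the first page or two of each gallery item,
// so they render instantly when clicked; later pages load on demand.
function preloadImages(urls: string[]): void {
  for (const url of urls) {
    const img = new Image();
    img.src = url;                 // browser downloads and caches it
  }
}

// PDF approach: for a linearized (Fast Web View) PDF, fetching just the
// first N bytes pulls down the part a viewer needs to show page 1.
// Requires the server to honour Range requests.
async function preloadPdfHead(url: string, bytes = 256 * 1024): Promise<void> {
  await fetch(url, { headers: { Range: `bytes=0-${bytes - 1}` } });
}

preloadImages(['gallery/item1-page1.png', 'gallery/item1-page2.png']);
preloadPdfHead('gallery/item1.pdf').catch(() => { /* preloading is best-effort */ });
```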
I use the MediaWiki API to find images in Wikipedia articles. However, I also get all the useless icons, like the broom shown when an article needs to be cleaned up, or the Creative Commons logo that marks something as being under a Creative Commons license.
Is there a way to detect which images are such icons so I can drop them? E.g., is there a way to query the size at which the image was embedded (rather than the size of the original image, which might be huge even for an icon) so that I can drop all the small ones? I'm not really interested in very small images anyway.
As far as I know, no. That information is simply not stored in the database, and is therefore also not available via the API.
Some things you could perhaps do include:
Load the HTML markup of the article (via the API action=parse, or simply via index.php with action=render) and extract the image sizes from it (see the sketch after this list).
Simply build a list of images that should be excluded. You could do this programmatically (e.g. find all images used on all templates included in Category:Wikipedia maintenance templates and all its subcategories) or just add any unwanted images to the exclusion list as you come across them.
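As a rough illustration of the first option, here is a small TypeScript sketch for the browser. The endpoint, article title, and 100 px threshold are just examples: it fetches the rendered HTML via action=parse and keeps only images whose displayed width is above the threshold.

```ts
// Sketch only: the endpoint, article title and 100 px cut-off are examples.
async function articleImages(title: string, minWidth = 100): Promise<string[]> {
  const url =
    'https://en.wikipedia.org/w/api.php?action=parse&prop=text' +
    '&format=json&formatversion=2&origin=*&page=' + encodeURIComponent(title);
  const data = await (await fetch(url)).json();
  const html: string = data.parse.text;

  // Parse the rendered HTML and read the width each image is displayed at,
  // which is what separates real illustrations from tiny template icons.
  const doc = new DOMParser().parseFromString(html, 'text/html');
  return Array.from(doc.querySelectorAll('img'))
    .filter(img => Number(img.getAttribute('width') ?? '0') >= minWidth)
    .map(img => img.getAttribute('src') ?? '');
}

articleImages('Alan Turing').then(srcs => console.log(srcs));
```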
I am a scriptmonkey working with a lot of graphic designers who know not a thing about the web.
Despite my objections, I frequently find myself with problems such as a 100 KB background image, several textual items they have turned into glossy images, three separate lengthy FLVs loading into a page, and so on.
I would really like to define a stack to control the order in which items load, e.g. render the background, then the HTML, then the page images, then load the FLVs.
I assume this exists and I have been searching badly.
Can anyone point me to good resources on this?
For the images, you can use each image's load event to know when it is done, so you can chain the loading that way. Have a look at this: https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5214317.html
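As a rough sketch of that idea in TypeScript: it assumes the images and videos are written out with a data-src placeholder instead of src (my assumption, not from the article), loads the images one at a time via their load events, and only then lets the heavy FLV/video embeds start loading.

```ts
// Assumes markup like <img data-src="..."> and <video data-src="...">,
// so nothing heavy starts downloading until we say so.
function loadImagesSequentially(images: HTMLImageElement[], done: () => void): void {
  const next = (i: number) => {
    if (i >= images.length) { done(); return; }
    const img = images[i];
    img.addEventListener('load', () => next(i + 1));
    img.addEventListener('error', () => next(i + 1));   // don't stall on a broken image
    img.src = img.dataset.src ?? '';                     // start this download only now
  };
  next(0);
}

window.addEventListener('DOMContentLoaded', () => {
  const images = Array.from(document.querySelectorAll<HTMLImageElement>('img[data-src]'));
  loadImagesSequentially(images, () => {
    // Only after all page images are in do the video embeds get their sources.
    document.querySelectorAll<HTMLVideoElement>('video[data-src]').forEach(v => {
      v.src = v.dataset.src ?? '';
    });
  });
});
```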