I am compiling a LaTeX file on a server and downloading the generated dvi, ps and pdf files from there to view them.
The LaTeX file includes some figure files via \includegraphics, and those figure files are not on my local machine. I found that the dvi file generated by the latex command does not show the figures after I download it, but the ps file generated by dvips -Ppdf does have them, while the pdf file generated by either ps2pdf or pdflatex again seems not to have them. Is this because the figures are not actually embedded as part of the dvi and pdf files by those compilation commands? How can I actually embed the figures so that only the document files need to be exported to other machines?
Are the cases of tables in separate tex files included into the main tex file by \input, and of style files included by \usepackage, similar to the above case of figure files included by \includegraphics?
Background
Embedded images, such as eps files, are not supported in Knuth's original dvi specification, which is not surprising since dvi is an older format than PostScript. Instead, they are encoded using an extension capability that is accessed in TeX through the \special command and in dvi by some reserved codes.
dvips set a standard for how to use specials, which gives a very flexible means of including PostScript facilities in TeX, but not all dvi utilities fully support these specials.
Answer, part one: where are the figures?
All figures imported using \includegraphics are encoded in the dvi as PostScript using the \special mechanism, so, to the first part of your question: yes, the image content is all in the dvi file, and you don't need to worry about where the files that were used to generate the dvi are.
However, different programs that process the dvi can do flaky things. dvips is the standard; ghostscript, which sits behind ps2pdf, has seen various kinds of flakiness over the years, and other dvi software has simply not supported specials, or has supported them oddly.
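If you want to check what \special material actually ended up in the dvi, you can dump it with dvitype, which ships with most TeX distributions (a quick sketch; paper.dvi is just a placeholder name):

    # Dump the dvi listing and keep only the special entries, which dvitype
    # prints as lines containing: xxx '...'
    dvitype paper.dvi | grep "xxx"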
Answer, part two: fixing things
Why do you think the figures are missing? Have you looked at the files in an on-screen viewer, or have you printed them out? On-screen viewers are often flaky; print-outs rarely are.
Otherwise, look at the bug-tracking systems for the various pieces of software you are using. I've found that pdftex insulates me from a lot of these issues, while ghostscript bugs have been particularly annoying to deal with over the years. Adobe Acrobat is clunky, but it is the standard and is good for troubleshooting ps/pdf issues. Try other software.
You should look at the warnings; I am sure there are some, because for ps you will need eps pictures and for pdflatex you will need png or jpeg. (Try compiling on your local machine first, e.g. with Kile or the gedit plugin.)
I am using Kile's compile commands for pdf generation, which always works fine (pictures included); they use pdflatex for that.
It depends a lot on the format of your picture files, and on how those formats are handled by the various tools you use to convert dvi -> ps -> pdf.
If you produce ps, the best approach is to provide your figures in eps (Encapsulated PostScript) format, compile with latex, and use dvips.
If you want to generate pdf, then provide your figures in pdf or png format and use pdflatex.
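As a rough sketch, the two routes look like this (paper.tex is just a placeholder name):

    # Route 1: eps figures, classic toolchain
    latex paper.tex                     # produces paper.dvi
    dvips -Ppdf paper.dvi -o paper.ps   # pulls the eps figures into the ps
    ps2pdf paper.ps paper.pdf

    # Route 2: pdf/png/jpeg figures, direct to pdf
    pdflatex paper.tex                  # embeds the figures straight into paper.pdf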
See
http://en.wikibooks.org/wiki/LaTeX/Importing_Graphics and
http://amath.colorado.edu/documentation/LaTeX/reference/figures.html
for good explanations of what happens.
The output of the latex command can also be very informative about what happens with your figures.
Related
I am building an internal project wiki for a group software development project. The wiki is currently powered by VimWiki, and I send the HTML files to the project supervisor and each member of the development team on a weekly basis. This keeps our intellectual property secure and internal, but also organized and up to date. I would like to put diagram images into the wiki itself so that all diagrams and documentation can be accessed together with ease. I am, however, having trouble making the images transferable between systems. Does VimWiki provide a way for image files to be embedded such that they can be transferred between systems? Ideally, the solution would make it possible to transfer the output directory of the VimWiki as a single entity containing both the HTML files and the image files.
I have tried reading the documentation on images in the vimwiki reference document, but have not had luck with the local: or file: variants. The wiki reference states that local: should convert the image links to a location relative to the output directory of the HTML files, but it breaks my image when I use it.
I currently have in my file:
{{file:/images/picture.png}}
I expect the system to be able to transfer the file between computers, but it resolves to an absolute link and also does not include the image directory in the output directory of the vimwikiAll2HTML command.
I know this is an old question, but try to use {{local:/images/picture.png}} instead. If you open :help vimwiki in Vim, you can find a part that says:
In Vim, "file:" and "local:" behave the same, i.e. you can use them with both
relative and absolute links. When converted to HTML, however, "file:" links
will become absolute links, while "local:" links become relative to the HTML
output directory. The latter can be useful if you copy your HTML files to
another computer.
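In other words, if the images sit inside the HTML output directory, the whole output can be shipped as one unit. A rough sketch (the ~/vimwiki_html path and the images/ subfolder are just assumed examples):

    # Hypothetical layout: the converted pages and the images live together
    # under the HTML output directory, so a {{local:/images/picture.png}}
    # link stays valid wherever the directory is copied.
    ls ~/vimwiki_html
    #   index.html  diagrams.html  images/picture.png
    tar czf project-wiki.tar.gz -C ~ vimwiki_html   # ship this single archive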
I have an Eclipse CDT C project on a Windows machine with all files, including the doxy file, encoded as UTF-8.
UTF-8 has also been specified as the encoding within the doxy file.
Yet the LaTeX files produced are encoded in ISO-8859-1.
In fact, if I open a .tex file (with TeXworks), change the file encoding, save it and close it, the encoding is still marked as ISO-8859-1 when I re-open it.
This means that UTF-8 symbols (such as \Delta) in the source make it through a doxygen build OK, but cause the PDF make to fail.
I'm not at all familiar with LaTeX, so I'm not sure where to even start searching on this one; Google queries to date have been fruitless. I'm also still not sure whether it is Doxygen, TeX or Windows that causes the .tex file encoding to be ISO-8859-1.
It would therefore be good to know: even though there is no specific option for setting the encoding of doxygen's .tex output, would it be set to the same as the DOXYFILE_ENCODING setting?
Assuming that is the case, moving one of the .tex files from the project folder to the desktop and attempting the encoding change via TeXworks still fails to stick, which leads me to think that either Windows or TeXworks is preventing the encoding from being UTF-8. My lack of knowledge of encodings and LaTeX has left me at a loose end here, though; any suggestions on what to try next?
Thanks
:\ I basically just ended up re-installing everything and making sure git ignored the .tex files and handled the PDF files separately from the code files, so that the encoding was forced. Not really a fix, but it builds.
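If you would rather force the conversion yourself, something along these lines might work (a sketch only: it assumes the generated LaTeX output sits in a latex/ directory, which is doxygen's default, and re-encodes it before building the PDF):

    # Re-encode doxygen's generated .tex files from ISO-8859-1 to UTF-8,
    # then run the generated Makefile to build the PDF.
    for f in latex/*.tex; do
      iconv -f ISO-8859-1 -t UTF-8 -o "$f.utf8" "$f" && mv "$f.utf8" "$f"
    done
    (cd latex && make)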
We've got a system that takes in a large variety of PDFs from unknown sources and then uses them as templates for new PDFs generated by Prawn.
Occasionally some PDFs don't work as templates for Prawn: they either trigger a generic Prawn error ("Prawn::Errors::TemplateError => Error reading template file. If you are sure it's a valid PDF, it may be a bug.") or the resulting PDF comes out malformed.
(It's a known issue that some PDFs don't work as templates in Prawn, so I'm not trying to address that here: [1] [2])
If I take any of the problematic PDFs, and manually re-save them on my Mac using Preview > Save As [new PDF], I can then always use them as Prawn templates without any problem.
My question is: is there some (open source) server-side utility I can use that might be able to do the same thing, i.e. process problematic PDFs into something Prawn can use?
Yarin, it at least partially depends on why the PDFs don't work in the first place. If you can use them after re-saving with Apple's (quite bad) preview PDF code, you should be able to get the same result using a number of different tactics:
-) Use an actual PDF library to open and save the PDF files (libraries from Adobe and Global Graphics come to mind). These are typically commercial products but (I know the Adobe library the best) they do allow you to open a file and save it, performing a number of optimisations in the process. The Adobe libraries are currently licensed through a company called DataLogics (http://www.datalogics.com)
-) Use a commercial product that embeds these libraries. callas pdfToolbox comes to mind (warning, I'm affiliated with this product). This basically gives you the same possibilities as the previous point, but in a somewhat easier to use package (command-line use for example).
-) Use an open source product. I'm not very well positioned to provide useful links for that.
There is another approach that may work depending on your workflow and files. In graphic arts, bad files are sometimes "made better" by a process called re-distilling: you basically convert the PDF file to PostScript and re-distill the PostScript into PDF again. Because this rewrites the whole file structure, it often fixes fundamental problems. However, it also comes with risks, as you're going through a different file format. Libraries such as Ghostscript (watch the licensing conditions) may allow you to do this.
Given that your files seem to be fixed simply by using preview, I would think a redistilling approach would be overly dangerous and overkill. I would look into finding a good PDF library that can automatically open and save your files.
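If you do want to experiment with the open-source route, Ghostscript can do both the straight rewrite and the re-distil described above (a sketch only; mind Ghostscript's licensing, and input.pdf is a placeholder name):

    # Rewrite the PDF through the pdfwrite device; this rebuilds the file
    # structure much as a desktop viewer's "Save As" does.
    gs -o repaired.pdf -sDEVICE=pdfwrite input.pdf

    # The riskier re-distilling route: via PostScript and back again.
    pdf2ps input.pdf temp.ps && ps2pdf temp.ps redistilled.pdf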
Currently I am using Ghostscript to merge a list of PDFs which are downloaded. The issue is that if any one of the PDFs is corrupted, it stops the merging of the rest of the PDFs.
Is there any command I can use so that it will skip the corrupted PDFs and merge the others?
I have also tested pdftk, but I face the same issue.
Or is there any other command-line PDF merging utility that I can use for this?
You could try MuPDF; you could also try using MuPDF's 'clean' tool to repair files before you try merging them. However, if a PDF file is so badly corrupted that Ghostscript can't even repair it, that probably won't work either.
There is no facility to ignore PDF files which are so badly corrupted that they can't even be repaired. It's hard to see how this could work in the current scheme, since Ghostscript doesn't 'merge' files anyway; it interprets them, creating a brand-new PDF file from the sequence of graphic operations. When a file is badly enough corrupted to provoke an error, we abort, because we may already have written whatever parts of the file we could, and if we tried to ignore the error and continue, both the interpreter and the output PDF file would be in an indeterminate state.
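One pragmatic workaround, building on the MuPDF suggestion above, is to pre-flight each file with mutool clean and only hand the survivors to Ghostscript. A sketch only; the downloads/ and cleaned/ directories and the file names are assumptions:

    #!/bin/sh
    # Try to repair each downloaded PDF; skip any that cannot be repaired,
    # then merge the survivors with Ghostscript's pdfwrite device.
    mkdir -p cleaned
    good=""
    for f in downloads/*.pdf; do
      out="cleaned/$(basename "$f")"
      if mutool clean "$f" "$out" 2>/dev/null; then
        good="$good $out"
      else
        echo "skipping $f: could not be repaired" >&2
      fi
    done
    gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=merged.pdf $good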
I need to split a PowerPoint presentation file (pptx and, if possible, ppt) into a set of files in the original format (pptx or ppt), each containing one slide from the original. I need to do this programmatically on a Linux Ubuntu server using free tools or a free external API. When a file gets uploaded to a directory, the program will be called from my main program (written in PHP) and do the split.
I am looking for suggestions about the language or set of tools to use. I looked at several options, listed below. It will take some time to try all of them, but if anyone could exclude or add to the list and/or provide code examples, it would help.
Thanks!
(1) Apache POI project (POI-XSLF)
(2) OpenOffice unoconv command line utility
(3) C# (with the Mono compiler for Linux). This may include the indirect option of deleting slides with powerPoint.Slides(x).Delete
(4) JODConverter (Java OpenDocument Converter)
(5) PyODConverter (Python OpenDocument Converter)
(6) Google Documents API
(7) Aspose.Slides for .NET is out because of cost
When I had the same need, I ended up shelling out and using "UNOCONV" to convert the files to PDF, and then used "PDFTK" to split the file by pages. Once that is done, you should be able to take the extra step and convert the new split PDF files back to PPTX using one more UNOCONV run.
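A rough sketch of that pipeline (it assumes unoconv with a working LibreOffice install and pdftk are available, and that the uploaded deck is called deck.pptx; how faithfully the split PDF pages convert back to pptx depends on LibreOffice's import filters):

    # pptx -> one pdf -> one pdf per slide -> one pptx per slide
    unoconv -f pdf deck.pptx                       # writes deck.pdf
    pdftk deck.pdf burst output slide_%02d.pdf     # one pdf per page/slide
    for page in slide_*.pdf; do
      unoconv -f pptx "$page"                      # each page back to .pptx
    done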
While it seems rather complicated, PPTX seems to be "that one OOXML file no one wants to touch"; libraries for it seem to be few and mostly incomplete.