generated docx with opentbs converted by unoconv and libreoffice - pdf-generation

For some reason I am experiencing strange behaviour.
When I merge my docx template with OpenTBS, everything works fine and the generated docx looks correct.
But now I need to convert the docx into a PDF, for which I am using unoconv and LibreOffice on Mac OS X 10.11.
When I do this, all strings with multiple lines (which are displayed correctly in the docx) are displayed as a single line in the PDF.
Likewise, if I open the generated docx with LibreOffice, all multi-line strings are displayed as a single line.
I figured out that I can use ;strconv=no.
This does exactly the opposite: all multi-line strings in the docx are displayed as a single line, but in LibreOffice, or when converting to PDF with unoconv, they are displayed correctly across multiple lines.
Does anyone have a solution for this problem?
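For reference, the two template variants being compared look roughly like this, followed by the conversion command (the field name and file name are placeholders, not taken from the original question):
[onshow.description]
[onshow.description;strconv=no]
unoconv -f pdf merged.docx
The first form keeps the line breaks intact in Word but collapses them in LibreOffice and in the PDF; the second does the reverse, as described above.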

Related

OFM rwrun - generating different PDF format compared to printer output

I am using the Oracle Reports rwrun command in VC++ to generate a report which goes directly to the printer (destype=printer).
Now I want to get this same report as a PDF file; for this I only changed destype=file and gave a file name in desname=xyz.pdf.
The issue I am facing is that the PDF generated in the second case is in a different format compared to the printed output in the first case. Please check the images below for the PDF format; the green box is the content area.
1. Correct PDF format  2. Incorrect PDF format
I tried changing the orientation to portrait but am not getting the desired format.
Please suggest a solution to get the same PDF as I get in the case of destype=printer.
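For context, the two invocations being compared look roughly like this (the report name and connection string are placeholders). It may also be worth setting desformat and orientation explicitly in the file case, since the printer definition no longer supplies them; this is a sketch of something to try, not a confirmed fix:
rwrun report=myreport.rdf userid=scott/tiger@mydb destype=printer
rwrun report=myreport.rdf userid=scott/tiger@mydb destype=file desformat=pdf desname=xyz.pdf orientation=landscape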

Convert a searchable PDF to searchable PDF/A using Ghostscript

I am using Ghostscript to convert PDF to PDF/A from the command line:
gs -dPDFA -dBATCH -dNOPAUSE -sProcessColorModel=DeviceCMYK -sDEVICE=pdfwrite -sPDFACompatibilityPolicy=1 -sOutputFile="output.pdf" input.pdf
But the output file loses its searchable text property.
How can I obtain searchable PDF/A files as output?
Thanks.
You haven't supplied an input file to look at, nor mentioned which version of Ghostscript you are using.
Let me start with my standard lecture on this subject: when you take a PDF file as input, and use Ghostscript's pdfwrite device to produce a new PDF file, you are NOT 'converting', 'editing' or 'modifying' the input file.
What happens is that the PDF interpreter interprets the PDF file and produces a series of graphics primitives, which it feeds to the graphics library. This then processes these primitives and passes them to the device. The device then emits them to the output file. In the case of a rendering device (e.g. TIFF) it renders the operations to a bitmap and, when it reaches the end of the file, writes the bitmap out as a file. In the case of pdfwrite, it re-assembles these primitives into a brand new PDF file.
So the output PDF file has nothing in common with the input PDF file, except its appearance.
There are disadvantages to this approach (it does limit us in preserving some non-printing aspects of the input file), but there are also advantages; for instance it permits us to alter colour spaces, flatten transparency, change font encodings etc.
In addition to this you have chosen to create a PDF/A file. PDF/A limits the available features of the PDF specification, and it may be (it's impossible to tell without seeing the original file) that it simply isn't possible to represent the original PDF file as a PDF/A file without altering some aspects of it.
Again, without seeing the original file I can't tell, but it may be that you simply cannot achieve what you want, or at least not using Ghostscript.
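For what it's worth, PDF/A conversions with Ghostscript are usually driven through the PDFA_def.ps prologue that ships in Ghostscript's lib directory (after editing it to point at an ICC output-intent profile). A sketch of such an invocation, with no guarantee that it keeps the text searchable for this particular input:
gs -dPDFA=2 -dBATCH -dNOPAUSE -sColorConversionStrategy=UseDeviceIndependentColor -sDEVICE=pdfwrite -sPDFACompatibilityPolicy=1 -sOutputFile=output.pdf PDFA_def.ps input.pdf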

Convert docx to mediawiki and preserve [[Image:]]

Currently, I'm trying to move a docx to a MediaWiki file and preserve the proper filenames in the [[Image:]] tags. For some reason, the proper image file gets swallowed (i.e., normally it'd be media/image4.jpg, but instead it's just empty).
I've tried extracting the docx and looking at docx/word/_rels/document.xml.rels but I have no idea how to figure out what images are duplicated. I made a simple script to do some find/replace, but in one file I have 130 [[Image:]] tags and only 105 images.
As such, I would like to have the MediaWiki filter output the proper image name when doing this:
soffice --headless --convert-to txt:MediaWiki myfile.docx
I'm on Ubuntu 14.10.
Is this possible?
This doesn't appear to be possible, but I have written a workaround found here that solves it. The long and short of it is that I convert the file and manage uploading / linking of images manually.
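A rough sketch of that kind of manual workaround (file names are placeholders; a docx is just a ZIP archive, so the embedded images sit under word/media):
soffice --headless --convert-to txt:MediaWiki myfile.docx
unzip -o myfile.docx 'word/media/*' -d myfile_images
The extracted files can then be uploaded to the wiki and the [[Image:]] tags fixed up by hand or with a small script.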

How to convert a source code text file (e.g. ASP, PHP, JS) to JPG with syntax highlighting using a command line / bash routine?

I need to create images of the first page of some source code text files, such as ASP, PHP or JS files.
I usually accomplish this by typing a command like
enscript --no-header --pages=1 "${input_file}" -o - | ps2pdf - "${temp_pdf_file}"
convert -quality 100 -density 150x150 -append "${temp_pdf_file}"[0] "${output_file}"
trash "${temp_pdf_file}"
This works nicely for my needs, but it obviously outputs an image "as is" with no "eye-candy" features.
I was wondering if there's a way to add syntax highlighting too.
This might come in handy to speed up the creation of presentations of developed work, for example.
Pygments is a source highlighting library which has PNG, JPEG, GIF and BMP formatters. No intermediate steps:
pygmentize -o jquery.png jquery-1.7.1.js
Edit: adding a source code image to a document means you are doing it wrong to begin with. I would suggest LaTeX, Markdown or similar for the whole document, so that the source code sections can be generated along with it.
Another easy/lazy way would be to create an HTML document using pygmentize and copy-paste it into the document. Not professional, but better than a raster image.
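If the HTML route is enough, something along these lines should work (the output name is arbitrary; -O full makes pygmentize emit a complete standalone HTML page with its CSS included):
pygmentize -f html -O full,style=default -o jquery.html jquery-1.7.1.js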
Here's how I do it on my Mac:
I open up the file with MacVIM. MacVIM supports syntax highlighting.
I print the file to a PDF. This gives me a paged document with highlighted syntax.
When I print, the program Preview opens up to display the file. I can export it to a JPG, or whatever my heart desires.
I don't have a Mac
This works with Windows too.
You have to get Vim, although Notepad++ may also work. Any programmer's editor will support syntax highlighting and allow you to print with the highlighted syntax. So, pick what you like.
You have to get some sort of PDF-producing print driver such as CutePDF.
For converting it to a JPG: I think Adobe Acrobat may be able to export a PDF to a JPG, or maybe the print driver can print to a JPG instead of a PDF. Or, you can send it to a friend who has a Mac.
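If you would rather stay on the command line than print from a GUI, a rough sketch of the same editor-based idea is to let Vim produce highlighted PostScript and reuse the ps2pdf/convert steps from the question (this assumes a Vim built with PostScript printing support and ImageMagick installed; file names are placeholders, and the colours depend on Vim's 'printoptions' settings):
vim -c 'syntax on' -c 'hardcopy > code.ps' -c 'quit' source.php
ps2pdf code.ps code.pdf
convert -quality 100 -density 150x150 code.pdf[0] code.jpg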

How to save text file in UTF-8 format using pdftotext

I am using the pdftotext open-source tool to convert PDFs to text files. How can I save the text files in UTF-8 format so that I retain all the accented characters in the text files? I am using the command below, which extracts the content to a text file, but I am not able to see any accented characters.
pdftotext -enc UTF-8 book1.pdf book1.txt
Please help me to resolve this issue.
Thanks in advance,
You can get a list of available encodings using the command:
pdftotext -listenc
and pick the right one using the -enc argument. Mine here seems to do UTF-8 by default, i.e. your "UTF-8" is superfluous:
pdftotext -enc UTF-8 your.pdf
You may want to check your locale (LC_ALL, LANG, ...).
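A quick way to check both at once (using the same placeholder file names as in the question; prefixing LANG is only a one-off test, assuming that locale is installed, not a permanent fix):
locale
LANG=en_US.UTF-8 pdftotext -enc UTF-8 book1.pdf book1.txt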
EDIT:
I downloaded the following PDF:
http://www.i18nguy.com/unicode/unicodeexample.pdf
and converted it on a Windows 7 PC (German) and XPDF 3.02PL5 using the command:
pdftotext.exe -enc UTF-8 unicodeexample.pdf
The text file is definitely UTF-8 encoded, as all characters are displayed correctly. What are you using the text file for? If you're displaying it through a web application, your content encoding might simply be wrong, while the text file has been converted as you wanted it to.
Double-check using either a browser (force the encoding in Firefox to ISO-8859-1 and UTF-8) or using a hex editor.
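On the command line, a couple of quick checks along those lines (the exact flag differs by platform: -i is GNU file, while macOS uses -I):
file -i book1.txt
hexdump -C book1.txt | head
In a correct UTF-8 file an accented character such as é shows up as a two-byte sequence (c3 a9) rather than a single byte.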
Things are getting a little bit messy, so I'm adding another answer.
I took the PDF apart and my best guess would be a "problem" with the font used:
open the PDF file in Acrobat Reader
select all the text on the page
copy it and paste it into a Unicode-aware text editor (there's no "hidden" OCR, so you're copying actual data)
You'll see that the codepoints you end up with aren't the ones you're seeing in the PDF reader. Whatever the font is, it may have a mapping different from the one defined in the Unicode standard. As such, your content is "wrong" and there's not much you can do about it.
