Is there a way to edit the raw text from a PDF without any special paid software?
So there are PDFs with highlightable text. I assume that the text is stored somewhere in the file.
I tried to just drag & drop a PDF into VS Code, but it mostly showed me unknown characters, along with a little of the metadata as text; and if I edit that metadata, the file usually gets corrupted.
Apart from that, I could not find any of the text content of my PDF in the VS Code editor.
Does anyone know of a way to inspect and change the source somehow, without special software? I want to edit the contents, not the metadata.
(I use macOS)
The text you see on a PDF page can be constructed in dozens of different ways; there are millions of users out there, using potentially hundreds if not thousands of different methods.
Update
The question is about macOS, but for a natively cross-platform approach you need to work with the PDF as plain text to be universally useful. By way of example of how that is possible, on Windows specifically you can write a PDF line by line using, say, cmd; here is a snippet of what was a few dozen lines :-)
rem PDF header line, then a comment line with high-bit characters (the binary marker)
echo %%PDF-1.0>demo.pdf
echo %%µ¶µ¶>>demo.pdf
echo/>>demo.pdf
rem remember the current file size: the byte offset of object 1, needed later for the xref table
for %%Z in (demo.pdf) do set "FZ1=%%~zZ"
rem object 1: the document catalog, pointing at the page tree in object 2
echo 1 0 obj>>demo.pdf
echo ^<^</Type/Catalog/Pages 2 0 R^>^>>>demo.pdf
echo endobj>>demo.pdf
echo/>>demo.pdf
For the fuller "feature-creeped" version, now at over 100 lines and counting, see
https://github.com/GitHubRulesOK/MyNotes/raw/master/MAKE-PDF.cmd
However, although plain text would be the simplest approach, it is rarely used except to prove the conceptual point that it is possible. The rest of the time the "special software" as you call it (a PDF generator/editor) will be used to compress the file objects, most frequently into different optimized binary streams.
So some text may be scanned pixels, whilst other text may be line shapes that look like letters, or plain letters with no font but a named style, or even letters with the font included (embedded) in the file (the preferred option).
Each page may be built differently from the others, and thus no two PDFs will generally use the same structure, unless, like a bank statement, they use a format that does not change much from month to month, even if the balance wobbles about.
So in summary, the tool that will work best is the one that covers every single permutation that Adobe dreamed of, and still keeps the result a valid Adobe PDF.
Thus Acrobat PRO 3D is on my shelf (even if it is not used from one year to the next).
There are many cheaper editors; the ones I use more often for small modifications are Tracker Xchange and FreePDF PRO, and both have different limitations.
Your choices for macOS will be more limited, so search for the best you are willing to pay for.
I am generating and storing PDFs in a database.
The pdf data is stored in a text field using Convert.ToBase64String(pdf.ByteArray)
If I generate the exact same PDF that already exists in the database and compare the two base64 strings, they are not the same. A big portion is the same, but it appears about 5-10% of the text is different each time.
What would make 2 pdfs different if both were generated using the same method?
This is a problem because I can't tell if the PDF was modified since it was last saved to the db.
Edit: The two PDFs appear exactly the same visually when viewing the actual PDF, but the base64 strings of the bytes are different.
Two PDFs that look 100% the same visually can be completely different under the covers. PDF producing programs are free to write the word "hello" as a single word or as five individual letters written in any order. They are also free to draw the lines of a table first followed by the cell contents, or the cell contents first, or any combination of these such as one cell at a time.
If you are actually programmatically creating the PDFs and you create two PDFs using completely identical code, you still won't get files that are 100% identical. There are a couple of reasons for this; the most obvious is that PDFs support creation and modification dates. These will obviously change depending on when they are created. You can override these (and confuse everyone else, so I don't recommend this) using something like this:
var info = writer.Info;
info.Put(PdfName.CREATIONDATE, new PdfDate(new DateTime(2001,01,01)));
info.Put(PdfName.MODDATE, new PdfDate(new DateTime(2001,01,01)));
However, PDFs also support a unique identifier in the trailer's /ID entry. To the best of my knowledge iText has no support for overriding this parameter. You could duplicate your PDF, change this manually and then calculate your differences and you might get closer to a comparison.
Then there's fonts. When subsetting fonts, producers create a unique internal name based on the original name and an arbitrary selection of six uppercase ASCII letters. So for the font Calibri the font's name could be JLXWHD+Calibri one time and SDGDJT+Calibri another time. iText doesn't support overriding of this because you'd probably do more harm than good. These internal names are used to avoid font subset collisions.
So the short answer is that unless you are comparing two files that are physical duplicates of each other you can't perform a direct comparison on their binary contents. The long answer is that you can tweak some of the PDF entries to remove unique parts for comparison only but you'd probably be doing more work than it would take to just re-store the file in the database.
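If byte-for-byte identity is not actually required, one option is to compare something more stable than the raw bytes. Here is a minimal sketch (assuming iTextSharp 5.x) that compares the extracted page text of the stored and regenerated files; it ignores dates, the /ID entry and subset font names, but it will also miss changes that do not affect extractable text (images, annotations, drawing order).

using System.Text;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

static class PdfContentCompare
{
    // Concatenate the extracted text of every page. Creation/modification
    // dates, the /ID entry and subset font name prefixes do not show up
    // here, so two visually identical files should produce the same string.
    public static string ExtractAllText(byte[] pdfBytes)
    {
        var reader = new PdfReader(pdfBytes);
        try
        {
            var sb = new StringBuilder();
            for (int page = 1; page <= reader.NumberOfPages; page++)
                sb.AppendLine(PdfTextExtractor.GetTextFromPage(reader, page));
            return sb.ToString();
        }
        finally
        {
            reader.Close();
        }
    }

    // Usage: bool unchanged = ExtractAllText(storedBytes) == ExtractAllText(newBytes);
}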
Full disclosure: I'm working on my libui GUI framework's text API. This wraps DirectWrite on Windows, Core Text on OS X, and Pango (which uses HarfBuzz for OpenType shaping) on other Unixes. One of the text formatting attributes I want to specify is a collection of OpenType features to use, which all three provide; DirectWrite's is IDWriteTypography.
Now, when you draw some text with these libraries, by default you'll get a few useful OpenType features enabled, such as the standard ligatures (liga) like the f+i ligature. I thought this was font-specific, but it turns out this is specific to the script of the text being shaped. Microsoft provides guidelines for all the scripts supported by OpenType (under "Script-specific Development"), and I can see rather complex logic for doing it all in HarfBuzz itself to confirm it.
On Core Text and Pango, if I enable other attributes, they'll be added on top of these defaults. But with DirectWrite, in particular IDWriteTextLayout::SetTypography(), doing so removes the defaults:
The program that produces this output can be found here.
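For reference, the kind of call that triggers this is sketched below; factory and layout are assumed to be an existing IDWriteFactory and IDWriteTextLayout, and error handling is omitted.

#include <dwrite.h>

// Minimal sketch: enable one extra OpenType feature over a text range.
void enableSmallCaps(IDWriteFactory *factory, IDWriteTextLayout *layout, UINT32 textLength)
{
    IDWriteTypography *typography = NULL;
    factory->CreateTypography(&typography);

    // Ask for one extra feature...
    DWRITE_FONT_FEATURE smcp = { DWRITE_FONT_FEATURE_TAG_SMALL_CAPITALS, 1 };
    typography->AddFontFeature(smcp);

    // ...and apply it; on DirectWrite this replaces the default feature list
    // for the range instead of adding to it.
    DWRITE_TEXT_RANGE range = { 0, textLength };
    layout->SetTypography(typography, range);
    typography->Release();
}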
Obviously my first option would be to ask how to get the default features on DirectWrite. Someone did so already on this site, though, and the answer seems to be "no".
I am guessing that DirectWrite is allowing me to be in complete control of the list of features to apply to some text. This is nice, except that I can't do this with the other APIs unless I explicitly disable the default features somehow! Of course, I don't know if this list will ever change, so hardcoding it might not be the best idea.
Even if hardcoding is an option, I could just grab HarfBuzz's list for each script, but (a) it's rather complicated and (b) there are multiple possible shapers for a script, depending on (I think) version compatibility (for instance, Myanmar).
So why not use HarfBuzz's lists to recreate the default list of features for DirectWrite anyway? It seems to want to be accurate to other shapers anyway, so this should work, right? Well, I would need to do two things: figure out what script to use, and figure out which attributes to use on which characters for scripts where the position of a character in the word matters.
DirectWrite provides an interface IDWriteTextAnalyzer that provides facilities to perform shaping. I could use this, but it seems the script data is returned in a DWRITE_SCRIPT_ANALYSIS structure, and the description for the script ID says "The zero-based index representation of writing system script.".
This doesn't help, so I wrote a program to just dump the script numbers for text I type in. Running it on the input string
لللللللللللللاااااااااالا abcd محمد ابن بطوطة Отложения датского яруса
yields the output
0 - 26 script 3 shapes 0
26 - 5 script 49 shapes 0
31 - 14 script 3 shapes 0
45 - 2 script 1 shapes 1
47 - 25 script 22 shapes 0
I cannot match these script numbers to anything in any of the Windows headers: if there is a defined number for Arabic, Latin, or Cyrillic in any API, they don't match these. And even if I did get a mapping between script and script number, that still doesn't give me the data to apply intra-word features.
What about Uniscribe? Well, the documentation for the equivalent SCRIPT_ANALYSIS type says that its script ID is an "[opaque] value" whose "value for this member is undefined and applications should not rely on its value being the same from one release to the next". And while I can get a language code to identify the script by, there's still no defined value other than LANG_ENGLISH for "Western" (Latin?) scripts. Are the DirectWrite values the same as the Uniscribe ones? And it seems like I can at least figure out the initial and final states of words by looking at the fLinkBefore and fLinkAfter fields, but is this enough to properly apply attributes per-script?
HarfBuzz does have an experimental DirectWrite backend that isn't intended to be used by real programs; I'm not yet sure whether it has the same feature-clobbering I specified above. If I find out, I'll update this part here.
Finally, if I enter the following equivalent test case to the first one above in something like kaxaml:
<Page
  xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
  xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Grid>
    <FlowDocumentPageViewer>
      <FlowDocument FontFamily="Constantia" FontSize="48">
        <Paragraph>
          afford afire aflight 1/4<LineBreak/>
          <Run Typography.Fraction="1">afford afire aflight 1/4</Run>
        </Paragraph>
      </FlowDocument>
    </FlowDocumentPageViewer>
  </Grid>
</Page>
I see the ligatures being applied properly, even in the latter case:
(The fraction at the end is just to prove that that attribute is being applied.) If I assume XAML uses DirectWrite, then that proves my first option (simply overlaying my custom attributes on top of the defaults) should be possible... (I make this assumption because XAML provides a strikingly similar API to Direct2D for drawing 2D graphics, with a lot of holes filled in where I had to write a lot of glue code by hand to do the same things with vanilla Direct2D, so I assume whatever is possible in XAML is possible with Direct2D, and by extension DirectWrite, since they were technically introduced together...)
At this point I'm completely lost. I want to at least be predictable across platforms, and I'm not sure how programs are even supposed to, let alone going to, use OpenType features directly or not anyway. Am I making bad expectations of text layout APIs? Will I have to drop IDWriteTextLayout and do all the text shaping and layout myself if I want this?
Or do I have to drop vanilla Windows 7 support and upgrade to the Platform Update DirectWrite feature set? Or even Windows 7 entirely?
After some discussions with Peter Sikking and Ebrahim Byagowi, I went and debugged a more general-purpose program I built quickly to test things, and I figured out what's going on internally.
First, however, I will say this applies to Uniscribe and DirectWrite equally.
As it turns out, DirectWrite is always providing a set of default OpenType features, regardless of what feature set I use! The situation is that the list of default features provided differs depending on whether I load my own features or not, and depending on the shaping engine. For the latn script in horizontal writing mode and for English, this is done with the "generic engine".
If I don't provide any features, the generic engine will load script-specific features. For horizontal latn, this list is
locl
ccmp
rlig
rclt
calt
liga
clig
If I do provide features, the generic engine will use the same default list for all scripts:
locl
ccmp
rclt
rlig
mark
mkmk
dist
So I don't know what to do about this. I could probably just provide liga and a few others myself in libui code (marked as a HACK of course), but this is still weird. I'm not sure what the motivation is either. Either way, this explains the behavior I'm seeing.
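For what it's worth, a sketch of what that hack could look like (assuming the horizontal latn list above is what needs restoring; the helper name is made up and error handling is omitted):

#include <dwrite.h>

// Hypothetical helper (not actual libui code): build an IDWriteTypography
// that re-adds the default horizontal-latn features, so that calling
// SetTypography() with extra features does not silently drop them.
// The caller adds its own features afterwards, then calls SetTypography().
IDWriteTypography *newTypographyWithLatnDefaults(IDWriteFactory *factory)
{
    static const UINT32 defaults[] = {
        DWRITE_MAKE_OPENTYPE_TAG('l','o','c','l'),
        DWRITE_MAKE_OPENTYPE_TAG('c','c','m','p'),
        DWRITE_MAKE_OPENTYPE_TAG('r','l','i','g'),
        DWRITE_MAKE_OPENTYPE_TAG('r','c','l','t'),
        DWRITE_MAKE_OPENTYPE_TAG('c','a','l','t'),
        DWRITE_MAKE_OPENTYPE_TAG('l','i','g','a'),
        DWRITE_MAKE_OPENTYPE_TAG('c','l','i','g'),
    };
    IDWriteTypography *typography = NULL;
    factory->CreateTypography(&typography);
    for (size_t i = 0; i < sizeof(defaults) / sizeof(defaults[0]); i++) {
        DWRITE_FONT_FEATURE f;
        f.nameTag = (DWRITE_FONT_FEATURE_TAG) defaults[i];
        f.parameter = 1;        // 1 = enabled
        typography->AddFontFeature(f);
    }
    return typography;
}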
Assuming your question is in general about programming, or at least concerns programming, I will try to give answers to some of your questions.
would I have to drop the use of IDWriteTextLayout entirely in my code if I want to be able to add typographical features on top of the defaults?
It depends. If an IDWriteTextLayout interface suits your project's needs well in every way except the ease of varying DirectWrite's default typographic features, learn what you need to about typography and create an IDWriteTypography instance suitable for your needs. Developing a custom text layout for the program may require substantial time and effort, especially if the program is supposed to render bidirectional text, complex scripts, inline objects, etc.
It may happen that the tasks of your project require developing a text layout engine for reasons other than just controlling the typographic features used in rendered text. For example, your manager/customer may ask for an implementation of customized line-breaking opportunities or a glyph advance justification algorithm. In this scenario, you will call the IDWriteTextAnalyzer::GetGlyphs method. This method has the parameters DWRITE_TYPOGRAPHIC_FEATURES ** features, const UINT32 * featureRangeLengths, UINT32 featureRanges, and these parameters enable you to supersede the set of "default" typographic features for a range of the text to be rendered (see my answer to the other question What are the default typography settings used by IDWriteTextLayout?). Only the affected features will be altered; the other features keep their "default" values. Moreover, if you omit these parameters in a GetGlyphs call for the next text range (for example, use values of NULL, NULL, 0), the features altered in the previous GetGlyphs call will not be altered by the call for this next range.
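As an illustrative sketch only (not complete rendering code), the three feature parameters fit together like this:

// One feature range covering a whole run (runLength, the run's length in
// UTF-16 code units, is an assumed variable), enabling small caps and
// leaving every other feature at its script-defined default.
DWRITE_FONT_FEATURE smcp = { DWRITE_FONT_FEATURE_TAG_SMALL_CAPITALS, 1 };
DWRITE_TYPOGRAPHIC_FEATURES rangeFeatures = { &smcp, 1 };
const DWRITE_TYPOGRAPHIC_FEATURES *features[] = { &rangeFeatures };
UINT32 featureRangeLengths[] = { runLength };
UINT32 featureRanges = 1;
// These are then passed as the features, featureRangeLengths and
// featureRanges arguments of IDWriteTextAnalyzer::GetGlyphs.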
the documentation for the equivalent SCRIPT_ANALYSIS type says that its script ID is an "[opaque] value" whose "value for this member is undefined and applications should not rely on its value being the same from one release to the next". And while I can get a language code to identify the script by, there's still no defined value other than LANG_ENGLISH for "Western" (Latin?) scripts.
Strictly speaking, this is not an interrogative statement, but I guess you are dissatisfied with how these Unicode script IDs are defined and how one can use the API with so vaguely defined structures and constants.
It may be off topic, but I will risk a hypothesis on the origin of the "Unicode script ID" values. As of 2010-07-17, Unicode, Inc. had published Unicode version 6.0. The standard contained the document
http://www.unicode.org/Public/6.0.0/ucd/PropertyValueAliases.txt, with a section containing a list of scripts. The list went like this:
# Script (sc)
sc ; Arab ; Arabic
sc ; Armi ; Imperial_Aramaic
etc.
The Arabic script is #1, the Cyrillic script is #20, and the Latin script is #47 in this list. Furthermore, elsewhere I have seen this list starting with the scripts Common and Inherited, which places the Arabic script 3rd, the Cyrillic 22nd, and the Latin 49th. These ordinals are familiar to you, aren't they?
Fortunately, we need not rely on the "Unicode script ID" values; we need script properties, not script IDs or abbreviations. The API is self-consistent in that it gives the actual script properties for a text range when we pass to the GetScriptProperties method the number derived from an AnalyzeScript call.
I've got three bash scripts in three different sibling directories.
The first few lines of each do some setup, different between each one.
The last twenty or so lines of the scripts are character for character identical, processing and comparing the files constructed in the first bit.
What I'd like to do is to put the last twenty lines in, say ../common.bash, and do something like
#include "../common.bash"
in each of the three scripts, so as to avoid having to make the same changes in three places every time I fiddle.
So far my best guess is to use cat to construct the scripts out of the four morally-independent pieces.
Is there a better way?
Use the source.
source /path/to/common.bash
You shouldn't use a relative path, because it will be interpreted relative to the user's working directory, not the location of the script.
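If the common file does need to live relative to the script rather than to the caller's working directory, a common idiom (a sketch, assuming bash) is to resolve it from the script's own location:

# Resolve common.bash relative to this script's directory, not the CWD.
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$script_dir/../common.bash"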
Use meld
source is probably the answer I wanted, but actually in this case I've found that it's best to use meld to view the three files side by side, and to use meld to propagate favourite changes.
The advantage is that when working on one file, I can see the whole thing at once.
But it won't scale to the inevitable fourth copy, so at that point I'll use source, I guess.
Looking for tips and/or tools on how to efficiently work with gettext PO files when making small edits to large msgid values.
Example: We have lots of multi-sentence/multi-paragraph messages that are stored in our PO message catalog files. If we make a very minor change to a message, perhaps editing a single sentence or even correcting punctuation, we lose our original translation when we run the msgmerge utility.
Rather than re-translate long messages (which have already gone through an editorial approval process) from scratch, our translators go back to backup copies of their PO files and manually search for the text of the last msgid/msgstr translation pair, which they then diff against the current msgid values to see what has changed. They then copy and paste the last translation and edit it to reflect the updated msgid value.
That's a lot of work! Certainly there must be a better way of handling this type of workflow?
Is there a best practice way to archive and find previous translations that are no longer in a PO file? One idea that comes to mind is to store a unique msg id in the text of our messages or in the comments that precede our message and use this id to retrieve previous msgid/msgstr translation pairs for review. Or are there PO editors or online services that make this process more efficient?
Thank you,
Malcolm
I've been looking for a way to make minor changes to msgids without disturbing existing translations - for instance, typo fixes in the source text. Here's a recipe I've just worked out that doesn't involve websites:
Use msgen from GNU gettext to generate an English-to-English po file:
msgen project.pot >corrections.po
Manually edit the msgstrs in "corrections.po" to reflect the typo fixes made in the source text, so we have a mapping from uncorrected to corrected strings. (I haven't thought about how to automate this bit.)
For each "real" translation (for example ca.po): abuse poswap from the Translate Toolkit (translate-toolkit in Ubuntu) to change the msgids:
poswap -i corrections.po -t ca.po -o ca.new.po
This does seem to lose header comments and obsolete strings from GNU gettext po files, but manually fixing those up is much less work than manually tweaking msgids in each translation (and could probably easily be scripted).
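For example, a small wrapper along these lines (a sketch; the file layout is an assumption) would apply the corrections to every translation in the directory:

# Apply the uncorrected->corrected msgid mapping to every translation.
for po in *.po; do
    [ "$po" = corrections.po ] && continue
    poswap -i corrections.po -t "$po" -o "${po%.po}.new.po"
done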
(Obviously, this should only be used in exceptional circumstances, where you're absolutely sure that none of the translators need the opportunity to re-review their translations.)
Virtaal's translation memory support can probably help with this. If your original units are in the translation memory, they will be shown (with their differences) when within a certain margin of change (based on Levenshtein distance). The match will still contain the original (unmodified) translation, but at least the original text is more easily accessible and the differences are highlighted.
I'm not 100% sure, but Pootle might also offer a web based solution. If you need any help, ask in #pootle on FreeNode.
The more general improvement is, of course, to separate/segment the units as far as possible.