I'm learning how to addLtv from this link
my workflow is for 2 signers:
signer 1:
prepare empty signature (with NOT_CERTIFIED certification level)
generate hash and get p7s
inject p7s to pdf for signer 1
addLtv for signer 1 ---> I am not sure on this step
signer 2:
prepare empty signature on the PDF already signed by signer 1 (with CERTIFIED_NO_CHANGES_ALLOWED certification level)
generate hash and get p7s for signer 2
inject p7s to pdf for signer 2
addLtv for signer 2 ---> I am not sure on this step
In this workflow the signature of signer 2 will be reported invalid. Any idea how to addLtv for this workflow?
Is it common practice to change the last signer to NOT_CERTIFIED, so that his signature will not become invalid, and to add another signature (like just a timestamp) to make the document CERTIFIED_NO_CHANGES_ALLOWED? Or is there a better way to accomplish this workflow?
There are two problems in your code that correctly make Adobe Reader report issues. I'm not sure yet, though, whether after fixing them Adobe Reader will be happy.
The issues in your workflow are:
Certification signature as second signature
PAdES ESIC extension only declared after first signature
Certification signature as second signature
First of all, your code applies a normal approval signature (PdfSigner.NOT_CERTIFIED, set via signer.SetCertificationLevel) as the first signature and then a certification signature (PdfSigner.CERTIFIED_NO_CHANGES_ALLOWED, also set via signer.SetCertificationLevel) as the second one.
This is not allowed. According to the PDF specification ISO 32000-1:
A PDF document may contain [...]
At most one certification signature (PDF 1.5). [...] The signature dictionary shall contain a signature reference dictionary (see Table 253) that has a DocMDP transform method. [...]
A document can contain only one signature field that contains a DocMDP transform method; it shall be the first signed field in the document.
(sections 12.8.1 and 12.8.2.2.1 of ISO 32000-1)
And according to ISO 32000-2:
A PDF document may contain the following standard types of signatures: [...]
One or more approval signatures (also known as recipient signatures). These shall follow the certification signature if one is present.
(section 12.8.1 of ISO 32000-2)
Either way, approval signatures must follow the certification signature, not precede it.
(The change between the specification versions most likely has been made to allow document time stamps to precede the certification signature.)
Thus, Adobe Reader should have complained immediately after the second signature was applied!
There is a way, though, to change the certification level in a later approval signature: If your PDF validator supports ISO 32000-1 with Adobe Supplement with ExtensionLevel 3 or ISO 32000-2, FieldMDP transforms can be used to this effect.
Please read this answer for some information on this option.
PAdES ESIC extension only declared after first signature
Your first signature is applied without the document declaring that any PDF extensions apply. A PDF validator may therefore assume that the base ISO 32000-1 rules apply to signature validation, in particular that a certification level of "no changes allowed" indeed means that no changes are allowed. This rule may only differ if an appropriate extension is declared and the PDF validator supports it. In particular
ESIC extension level 1 (as per ETSI TS 102 778-4 and EN 319 142-1) or
ADBE extension level 5 (as per ETSI TS 102 778-4) or
ADBE extension level 8 (as per ETSI EN 319 142-1)
should indicate that, as ISO 32000-2 meanwhile puts it,
Changes to a PDF that are incremental updates which include only the data necessary to add DSS’s and/or document timestamps to the document shall not be considered as changes to the document
The LTV adding code of iText in your PDFs then declares an ESIC extension level 5. I'm not sure where that comes from, whether there is another TS or EN mentioning that level or whether the ESIC and ADBE levels have been mishmashed.
Thus, already with your first signature you should declare one of the extensions mentioned above.
If document is your PdfDocument instance before or while you apply the first signature (you may have to retrieve it from your PdfSigner signer using signer.GetDocument()), you can declare an extension like this:
PdfDeveloperExtension extension = new iText.Kernel.Pdf.PdfDeveloperExtension(
    PdfName.ESIC, PdfName.Pdf_Version_1_7, 1);
document.GetCatalog().AddDeveloperExtension(extension);
Alternatively, you should set your PDF version to 2.0 when signing. This may cause other issues, though.
Related
I am trying to verify a signature of a file using my windows 10 and I believe I might have reached an egg and chicken problem, looking forward some pros advices.
I was trying to verify the signature of the Maven binary (https://maven.apache.org/download.cgi), so I found this documentation https://infra.apache.org/release-signing#verifying-signature
However, I am using Windows 10, which does not come with gpg built in: 'gpg' is not recognized as an internal or external command.
So I need to download GnuPG so that I can use it to verify the signature of my Maven binary.
However, to install GnuPG (https://www.gnupg.org/download/index.html) I should also verify the .sig file of GnuPG itself.
Does anyone know how can I do the verification of the GNUPG file using any Windows 10 built in command line? Or the most advisable strategy?
Thank you a lot
Regards
Personally I think you have to draw the line somewhere.
For me, I would either compile GPG from source (so that you, or others, can audit the code), or use the published SHA-1 hashes (not sure why they still use SHA-1):
d928d4bd0808ffb8fe20d1161501401d5d389458 gnupg-2.2.27.tar.bz2
9f2ff2ce36b6537f895ab3306527f105ff95df8d gnupg-w32-2.2.27_20210111.exe
5e620d71fc24d287a7ac2460b1d819074bb8b9bb libgpg-error-1.42.tar.bz2
6b18f453fee677078586279d96fb88e5df7b3f35 libgcrypt-1.9.3.tar.bz2
740ac2551b33110e879aff100c6a6749284daf97 libksba-1.5.1.tar.bz2
ec4f67c0117ccd17007c748a392ded96dc1b1ae9 libassuan-2.5.5.tar.bz2
3bbd98e5cfff7ca7514ae600599f0e1c1f351566 ntbtls-0.2.0.tar.bz2
f9d63e9747b027e4e404fe3c20c73c73719e1731 npth-1.6.tar.bz2
b8b88cab4fd844e3616d55aeba8f084f2b98fb0f pinentry-1.1.1.tar.bz2
5ae07a303fcf9cec490dabdfbc6e0f3b8f6dd5a0 gpgme-1.15.1.tar.bz2
3f8a0ba9c7821049d51b982141a2330a246beb55 scute-1.7.0.tar.bz2
61475989acd12de8b7daacd906200e8b4f519c5a gpa-0.10.0.tar.bz2
e708d4aa5ce852f4de3f4b58f4e4f221f5e5c690 dirmngr-1.1.1.tar.bz2
a7d5021a6a39dd67942e00a1239e37063edb00f0 gnupg-2.0.31.tar.bz2
13747486ed5ff707f796f34f50f4c3085c3a6875 gnupg-1.4.23.tar.bz2
d4c9962179d36a140be72c34f34e557b56c975b5 gnupg-w32cli-1.4.23.exe
Then, from there on in, you can retrospectively verify the signature.
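If you do go the hash route, checking a downloaded file against one of the published hashes is straightforward; here is a minimal sketch in Python (the filename below is just an example entry from the list above):

```python
import hashlib

def sha1_of_file(path: str) -> str:
    """Compute the SHA-1 digest of a file, streaming it in chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published hash, e.g. for the Windows installer:
published = "9f2ff2ce36b6537f895ab3306527f105ff95df8d"
# sha1_of_file("gnupg-w32-2.2.27_20210111.exe") == published
```

On Windows 10 itself, the built-in certutil -hashfile <file> SHA1 command produces the same digest without installing anything extra.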
You're right that, to a degree, it becomes a chicken-and-egg problem, which is a recurring theme in cryptographic engineering; again, you have to draw the line somewhere.
I mean, are you going to be able to verify that the p and q primes used by GPG's private key (the one signing the binaries) were validated using a correct implementation of the Miller-Rabin primality test?
Or, if it is an elliptic-curve key, that the entropy used to generate the private scalar was high? ...
No! So don't worry too much; you're already an order of magnitude beyond the average user's OpSec.
I’m aware that .asc signatures are output as a text file, while .sig & .gpg are binary.
That aside:
Are .sig and .gpg the same file with different extensions? If not, why use one over the other?
Between text files and binary files, what are the relative advantages? Security, efficiency, compatibility, etc.
Are .sig and .gpg the same file with different extensions?
No, they are different files in the context of GnuPG.
.gpg - GNU Privacy Guard public keyring file, binary format. See examples from 4.2 Configuration files
.sig - GPG signed document file, binary format.
.asc - ASCII-armored signature, with or without the wrapped document, plain-text format. Usually used in clearsigned documents, where the unmodified original document and its signature are attached together. For detached signatures, you can generate the signature alone, without the original doc, via --detach-sig.
If not, why use one over the other?
Good question! Since OpenPGP is an open standard (RFC 4880), section 6 of it provides a detailed explanation; I'll just quote the key part:
In principle, any printable encoding scheme that met the requirements
of the unsafe channel would suffice, since it would not change the
underlying binary bit streams of the native OpenPGP data structures.
The OpenPGP standard specifies one such printable encoding scheme to
ensure interoperability.
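As a toy illustration of that quoted point: ASCII armor is essentially base64 over the same binary bytes, wrapped in BEGIN/END lines. This sketch deliberately omits details of the real armor format, such as the CRC-24 checksum line and armor headers:

```python
import base64
import textwrap

def toy_armor(binary: bytes, block_type: str = "PGP SIGNATURE") -> str:
    """Illustrative only: wrap binary OpenPGP data in armor-style
    BEGIN/END lines around base64 text, 64 characters per line."""
    body = "\n".join(textwrap.wrap(base64.b64encode(binary).decode("ascii"), 64))
    return (f"-----BEGIN {block_type}-----\n\n"
            f"{body}\n"
            f"-----END {block_type}-----\n")

def toy_dearmor(text: str) -> bytes:
    """Recover the original binary bytes from the toy armor above."""
    lines = [ln for ln in text.splitlines() if ln and not ln.startswith("-----")]
    return base64.b64decode("".join(lines))
```

The underlying bit stream is unchanged by armoring, which is exactly why the choice between .sig and .asc is about transport safety, not security.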
I will use this answer as a reply regarding the pros and cons of the binary vs ASCII formats.
Without any encryption, if the recipient has the serialized Protobuf file but does not have the generated Protobuf class (they don't have access to the .proto file that define its structure), is it possible for them to get any data in the Protobuf file from the binary?
If they have access to a part of the .proto file (for example, just one related message in the file) can they get a part of that data out from the entire file while skipping other unknown parts?
yes, absolutely; the protoc tool can help with this (see: --decode_raw), as can https://protogen.marcgravell.com/decode - so it should not be treated as "secure" at all
yes, absolutely - that's a key part built into the protocol that allows messages to be extensible such that they can decode the bits they understand and either ignore or just store (for round-trip or "extension" fields) the bits they don't understand
protobuf is not a security device; to someone with the right tools it is just as readable as xml or json, with the slight issue that it can be uncertain how to interpret some values; but: you can infer and guess and reverse engineer
Ok, I have found this page https://developers.google.com/protocol-buffers/docs/encoding
The message discards all the names and is just pairs of field numbers and values. The generated class might appear to offer some protection, since it was generated from a known structure (the .proto file), so it reads known data safely and skips unknown data.
But if I were an attacker, I could reference that Encoding page and try to figure out which area of the binary corresponds to which data. For example, varints might be easy to spot after changing some data. I could then proceed to write my own .proto file to attack this unknown data, or even a custom binary reader that can selectively read part of the binary.
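A custom reader along those lines is indeed feasible. Here is a minimal sketch of a --decode_raw-style walk over the wire format described on that Encoding page (illustrative only, with no error handling for truncated input):

```python
def read_varint(buf: bytes, i: int):
    """Decode a base-128 varint starting at offset i; return (value, next_offset)."""
    shift = value = 0
    while True:
        b = buf[i]
        i += 1
        value |= (b & 0x7F) << shift
        if not (b & 0x80):
            return value, i
        shift += 7

def decode_raw(buf: bytes):
    """Yield (field_number, wire_type, value) triples without any .proto:
    wire type 0 = varint, 1 = 64-bit, 2 = length-delimited, 5 = 32-bit."""
    i = 0
    out = []
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire = key >> 3, key & 7
        if wire == 0:
            v, i = read_varint(buf, i)
        elif wire == 2:
            n, i = read_varint(buf, i)
            v, i = buf[i:i + n], i + n
        elif wire == 1:
            v, i = buf[i:i + 8], i + 8
        elif wire == 5:
            v, i = buf[i:i + 4], i + 4
        else:
            raise ValueError(f"unsupported wire type {wire}")
        out.append((field, wire, v))
    return out

# The canonical example from the Encoding page: field 1 = varint 150
# encodes as b"\x08\x96\x01", which decodes to [(1, 0, 150)].
```

Note that the field names never appear on the wire, only field numbers, which is why recovering meaning still takes guessing or a leaked .proto.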
This actually breaks down into a lot of separate questions to understand the overall process.
From what I understand a JWT is just three JSON objects encoded into base64 separately from one another. Then the Base64 strings are separated by periods. This is done purely for "shorter message" purposes?
These include a header, "payload," and signature. The header and payload are 100% available to read by anyone who intercepts them. They are just base64 strings that can be decoded into JSON and read.
Then the MAGIC: The server receives the SIGNATURE, which cannot be decoded. The signature is actually a hash of the header, payload, AND a secret key. So the server takes the header, the payload, and ITS OWN secret key, and makes a hash. If this hash MATCHES the signature that came with the message, the message is trusted. If the signatures DO NOT match, the message is invalid.
My problem with all this? Where are the two separate keys here? It seems that the key used to encrypt the message and the key used to decrypt the message are the same. This is the root of my question - if you answer nothing else, please help with this.
Other than that, I wonder if I understand the process correctly? Also, where does the standard "agreeing on a public key" and the trading of "mixtures" of public/private keys occur here? All I see is the same key being used to encode/decode. But when did the agreement happen? I'm viewing this in the context of .NET and Auth0, btw, but it's a general question.
Random stuff I watched/read/used if anyone is interested on seeing this q later:
Summary of JWTs: https://scotch.io/tutorials/the-anatomy-of-a-json-web-token
Public-key/Assymetric Cryptography: https://youtu.be/3QnD2c4Xovk
Hashing: http://www.webopedia.com/TERM/H/hashing.html
Base64: http://en.wikipedia.org/wiki/Base64
Firstly, JSON Object Signing and Encryption standards (JOSE) use base64url encoding and not straight base64 encoding, which differs slightly.
The JWT header and payload are JSON objects, but the signature is not; it's a base64url-encoded binary blob.
The whole JWT is available to anyone who intercepts it, all 3 parts of it.
You're describing a symmetric-key algorithm, where sender and receiver use the same shared key; that is just one option for JWTs. Another option is to use public/private key pairs for signing/validation/encryption/decryption.
As with all crypto, agreement on keys needs to happen out of band.
Then the MAGIC: The server receives the SIGNATURE, which cannot be decoded. The signature is actually a hash of the header, payload, AND a secret key. So the server takes the header, the payload, and ITS OWN secret key, and makes a hash. If this hash MATCHES the signature that came with the message, the message is trusted. If the signatures DO NOT match, the message is invalid.
There is no magic here. JWT supports four well-known signature and MAC (message authentication code) constructions: HMAC (a symmetric algorithm), and ECDSA, RSASSA-PKCS-v1.5 and RSASSA-PSS (public-key algorithms). Each of these may be used with the SHA-256, SHA-384 or SHA-512 cryptographic digest. See also the table of Cryptographic Algorithms for Digital Signatures and MACs from RFC 7518 - JSON Web Algorithms (JWA).
My problem with all this? Where are the two separate keys here? It seems that the key used to encrypt the message and the key used to decrypt the message are the same. This is the root of my question - if you answer nothing else, please help with this.
There are not necessarily two separate keys - if a public-key algorithm is used, the signature will be created using the server's private key and verified using the corresponding public key. But if an HMAC algorithm is used, a shared secret key must be used for both signing and verification.
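To make the symmetric (HMAC) case concrete, here is a sketch of HS256 signing and verification; the secret and claims below are made up for illustration:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """base64url without padding, as used by JOSE (not plain base64)."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sign_hs256(payload: dict, secret: bytes) -> str:
    """Build header.payload and append an HMAC-SHA256 tag over both."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_hs256(token: str, secret: bytes) -> bool:
    """Recompute the MAC with the same shared key and compare."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

Note how verification recomputes the MAC with the very same shared key, so for HS256 there really are not two separate keys; the private/public split only appears with the RS256/PS256/ES256 variants.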
I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. As per the documentation:
http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx
PassStub: The encrypted novice computer's password string. When the Remote
Assistance Connection String is sent as a file over e-mail, to provide additional security, a
password is used.<16>
In note <16> they detail how to create a PassStub.
In Windows XP and Windows Server 2003, when a password is used, it is encrypted using
PROV_RSA_FULL predefined Cryptographic provider with MD5 hashing and CALG_RC4, the RC4
stream encryption algorithm.
A PassStub looks like this in the file:
PassStub="LK#6Lh*gCmNDpj"
If you want to generate one yourself run msra.exe in Vista or run the Remote Assistance tool in WinXP.
The documentation says this stub is the result of the function CryptEncrypt with the key derived from the password and encrypted with the session id (Those are also in the ticket file).
The problem is that CryptEncrypt produces a binary output way larger than the 15 byte PassStub. Also the PassStub isn't encoding in any way I've seen before.
Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of: !#$&()+-=#^. The only symbols seen everywhere are: * and _. Otherwise the valid characters are 0-9, a-z, A-Z. There are a total of 75 valid characters, and the PassStub is always 15 bytes.
Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid as they say.
Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of including the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx), the "PassStub Novice" is simply hex encoded, and looks to be the right length. The problem is I have no idea how to go from any hash to the way the PassStub looks.
I am curious, have you already:
considered using ISAFEncrypt::EncryptString(bstrEncryptionkey, bstrInputString) as a higher-level alternative to doing all the dirty work directly with CryptEncrypt? (the tlb is in hlpsvc.exe)
looked inside c:\WINDOWS\pchealth\helpctr\Vendors\CN=Microsoft Corporation,L=Redmond,S=Washington,C=US\Remote Assistance\Escalation\Email\rcscreen9.htm (WinXP) to see what is going on when you pick the Save invitation as a file (Advanced) option and provide a password? (feel free to add alert() calls inside OnSave())