Confused about how to do certificate pinning. How do we install the certificate on an Android or iOS device through Xamarin.Forms? Is this done during installation of the application? There are some tutorials on how to validate HTTPS requests using pinning, but nothing about installing the public certificate.
Another way to go is to perform pinning on the leaf certificate's public key. In this simple demo class we can see how to do it when using the HttpClient, by customizing the ServicePointManager:
using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

namespace ApproovSDK
{
    /**
     * Service point configuration.
     *
     * Adds a simple pinning scheme to the service point manager.
     *
     * FOR DEMONSTRATION PURPOSES ONLY
     */
    public static class ServicePointConfiguration
    {
        private static string PinnedPublicKey = null;

        public static void SetUp(string key = null)
        {
            PinnedPublicKey = key;
            ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
            ServicePointManager.ServerCertificateValidationCallback = ValidateServerCertificate;
        }

        private static bool ValidateServerCertificate(
            object sender,
            X509Certificate certificate,
            X509Chain chain,
            SslPolicyErrors sslPolicyErrors
        )
        {
            // No pin configured: fall back to accepting the connection.
            if (string.IsNullOrEmpty(PinnedPublicKey)) return true;
            //Console.WriteLine("Expected: " + PinnedPublicKey);
            //Console.WriteLine("Found   : " + certificate?.GetPublicKeyString());
            return String.Equals(PinnedPublicKey, certificate?.GetPublicKeyString(),
                StringComparison.OrdinalIgnoreCase);
        }
    }
}
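Wiring this up is then just a matter of calling SetUp with the pin before creating the HttpClient. A minimal usage sketch, assuming the managed (Mono) HttpClient handler is in use (ServicePointManager is ignored by the native Android/iOS handlers) and with a placeholder pin value:

// Requires: using System.Net.Http;
// The pin is the hex string returned by X509Certificate.GetPublicKeyString() for the real server certificate.
ServicePointConfiguration.SetUp("INSERT-PUBLIC-KEY-HEX");

// HttpClient instances created after this point go through the validation callback above.
var client = new HttpClient();
var response = await client.GetAsync("https://example.com/");  // inside an async method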
The above example was written for demonstration purposes; a better implementation would associate multiple pins with each domain being called.
While you can use the certificate itself to perform your validation and thus pin the certificate, there are alternative options.
As per the OWASP documentation, you can implement any of the following three approaches:
Certificate
The certificate is easiest to pin. You can fetch the
certificate out of band for the website, have the IT folks email your
company certificate to you, use openssl s_client to retrieve the
certificate etc. When the certificate expires, you would update your
application. Assuming your application has no bugs or security
defects, the application would be updated every year or two. At
runtime, you retrieve the website or server's certificate in the
callback. Within the callback, you compare the retrieved certificate
with the certificate embedded within the program. If the comparison
fails, then fail the method or function.
There is a downside to pinning a certificate. If the site rotates its
certificate on a regular basis, then your application would need to be
updated regularly. For example, Google rotates its certificates, so
you will need to update your application about once a month (if it
depended on Google services). Even though Google rotates its
certificates, the underlying public keys (within the certificate)
remain static.
Public Key
Public key pinning is more flexible but a little trickier due to the
extra steps necessary to extract the public key from a certificate. As
with a certificate, the program checks the extracted public key against
its embedded copy of the public key. There are two downsides to
public key pinning. First, it's harder to work with keys (versus
certificates) since you usually must extract the key from the
certificate. Extraction is a minor inconvenience in Java and .Net,
but it's uncomfortable in Cocoa/CocoaTouch and OpenSSL. Second, the
key is static and may violate key rotation policies.
Hashing
While the choices above use DER encoding, it's also acceptable
to use a hash of the information (or other transforms). In fact, the
original sample programs were written using digested certificates and
public keys. The samples were changed to allow a programmer to inspect
the objects with tools like dumpasn1 and other ASN.1 decoders.
Hashing also provides three additional benefits. First, hashing allows
you to anonymize a certificate or public key. This might be important
if your application is concerned about leaking information during
decompilation and re-engineering.
Second, a digested certificate fingerprint is often available as a
native API for many libraries, so it's convenient to use.
Finally, an organization might want to supply a reserve (or back-up)
identity in case the primary identity is compromised. Hashing ensures
your adversaries do not see the reserved certificate or public key in
advance of its use. In fact, Google's IETF draft websec-key-pinning
uses the technique.
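For the first approach (pinning the certificate itself), here is a minimal C# sketch, assuming a DER-encoded copy of the server certificate is bundled with the app (the file name is illustrative; System.IO and System.Linq are needed in addition to the usings shown earlier):

// Sketch only: compare the presented certificate byte-for-byte against an embedded copy.
// "pinned-server.cer" is a hypothetical DER-encoded certificate shipped with the app.
private static readonly byte[] PinnedCertificate = File.ReadAllBytes("pinned-server.cer");

private static bool ValidateByCertificate(object sender,
    X509Certificate certificate,
    X509Chain chain,
    SslPolicyErrors sslPolicyErrors)
{
    if (certificate == null || sslPolicyErrors != SslPolicyErrors.None)
    {
        return false;
    }

    // GetRawCertData() returns the DER encoding of the certificate presented by the server.
    return PinnedCertificate.SequenceEqual(certificate.GetRawCertData());
}

When the server rotates its certificate, the embedded copy (and therefore the app) has to be updated, which is exactly the downside described above.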
I would strongly recommend the hashing approach: when validating the incoming certificate, you just need to check that the hash of the certificate coming from the server is one of those that you expect. Something like the following:
private bool ValidateServerCertificate(object sender,
    X509Certificate certificate,
    X509Chain chain,
    SslPolicyErrors sslPolicyErrors)
{
    // Make sure we have a certificate to check.
    if (certificate == null)
    {
        return false;
    }

    // Reject anything that already failed the standard chain validation.
    if (sslPolicyErrors != SslPolicyErrors.None)
    {
        return false;
    }

    // Accept the certificate only if its hash is one of the pinned values.
    return this.KnownKeys.Contains(certificate.GetCertHashString(),
        StringComparer.Ordinal);
}
Where KnownKeys is a simple compile-time-defined array of your known certificate hashes:
private readonly string[] KnownKeys = new[]
{
    "INSERT HASH",
    "AND A SECOND IF REQUIRED"
};
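To find the values to embed, one option is a one-off snippet run against a certificate exported from the server (the file name is just an example); the parameterless GetCertHashString() returns the SHA-1 thumbprint as an upper-case hex string, which is also what the callback above computes for the live connection:

// One-off helper: prints the thumbprint that belongs in KnownKeys.
// "server.cer" is a hypothetical certificate exported from the server (e.g. via openssl s_client).
var cert = new X509Certificate2("server.cer");
Console.WriteLine(cert.GetCertHashString());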
Related
I'm trying to upload a file to S3 and have it encrypted using the SSE-C encryption options. I can upload without the SSE-C options, but when I supply the sse_customer_key options I'm getting the following error:
ArgumentError: header x-amz-server-side-encryption-customer-key has field value "QkExM0JGRTNDMUUyRDRCQzA5NjAwNEQ2MjRBNkExMDYwQzBGQjcxODJDMjM0\nnMUE2MTNENDRCOTcxRjA2Qzk1Mg=", this cannot include CR/LF
I'm not sure if the problem is with the key I'm generating or with the encoding. I've played around with different options here, but the AWS documentation is not very clear. In the general SSE-C documentation it says you need to supply an x-amz-server-side-encryption-customer-key header, which is described as follows:
Use this header to provide the 256-bit, base64-encoded encryption key
for Amazon S3 to use to encrypt or decrypt your data.
However, if I look at the Ruby SDK documentation for uploading a file, the three options have slightly different descriptions:
:sse_customer_algorithm (String) — Specifies the algorithm to use to when encrypting the object (e.g.,
:sse_customer_key (String) — Specifies the customer-provided encryption key for Amazon S3 to use in
:sse_customer_key_md5 (String) — Specifies the 128-bit MD5 digest of the encryption key according to RFC
(I didn't copy that wrong, the AWS documentation is literally half-written like that)
So the SDK documentation makes it seem like you supply the raw sse_customer_key and that it would base64-encode it on your behalf (which makes sense to me).
So right now I'm building the options like this:
sse_customer_algorithm: :AES256,
sse_customer_key: sse_customer_key,
sse_customer_key_md5: Digest::MD5.hexdigest(sse_customer_key)
I previously tried doing Base64.encode64(sse_customer_key) but that gave me a different error:
Aws::S3::Errors::InvalidArgument: The secret key was invalid for the
specified algorithm
I'm not sure if I'm generating the key incorrectly or if I'm supplying the key incorrectly (or if it's a different problem altogether).
This is how I'm generating the key:
require "openssl"
OpenSSL::Cipher.new("AES-256-CBC").random_key
Oh, did you notice that your key contains '\n'? That's most probably why you get the CR/LF error:
QkExM0JGRTNDMUUyRDRCQzA5NjAwNEQ2MjRBNkExMDYwQzBGQjcxODJDMjM0(\n)nMUE2MTNENDRCOTcxRjA2Qzk1Mg=
As mentioned by the colleague in the comments, strict_encode64 is an option, as it complies with RFC 4648 and does not add line feeds.
By the way, I got this insight from here: https://bugs.ruby-lang.org/issues/14664
Hope it helps! :)
First of all, please make sure that you are using the latest version of the SDK (2.2.2.2).
As I understand it, when we generate the presigned URL we have to specify the SSECustomerMethod, and when consuming the URL the "x-amz-server-side-encryption-customer-key" header is set with the customer key; you also need to set the "x-amz-server-side-encryption-customer-algorithm" header.
// Generate a presigned URL that requires SSE-C when it is used.
var getPresignedUrlRequest = new GetPreSignedUrlRequest
{
    BucketName = bucketName,
    Key = "EncryptedObject",
    SSECustomerMethod = SSECustomerMethod.AES256,
    Expires = DateTime.Now.AddMinutes(5)
};
var url = AWSClients.S3.GetPreSignedURL(getPresignedUrlRequest);

// When consuming the URL, supply the customer key headers explicitly.
var webRequest = HttpWebRequest.Create(url);
webRequest.Headers.Add("x-amz-server-side-encryption-customer-algorithm", "AES256");
webRequest.Headers.Add("x-amz-server-side-encryption-customer-key", base64Key);

using (var response = webRequest.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    var contents = reader.ReadToEnd();
}
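For reference, base64Key above has to be the Base64 encoding of the raw 256-bit key on a single line (no CR/LF), and the x-amz-server-side-encryption-customer-key-MD5 header, where used, is the Base64 encoding of the binary MD5 digest of the raw key, not a hex digest. A hedged C# sketch of producing both values (the key generation is illustrative; any 32 random bytes will do):

// Raw 256-bit customer key.
byte[] rawKey = new byte[32];
using (var rng = System.Security.Cryptography.RandomNumberGenerator.Create())
{
    rng.GetBytes(rawKey);
}

// Base64 of the raw key; Convert.ToBase64String never inserts line breaks.
string base64Key = Convert.ToBase64String(rawKey);

// Base64 of the binary MD5 digest of the raw key (not of its Base64 form, and not a hex string).
string base64KeyMd5;
using (var md5 = System.Security.Cryptography.MD5.Create())
{
    base64KeyMd5 = Convert.ToBase64String(md5.ComputeHash(rawKey));
}

This mirrors the fix suggested earlier for the Ruby case: an encoding without line feeds for the key header, and the MD5 of the raw key rather than a hex digest for the MD5 header.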
I don't understand under what circumstances the situation below happens (piece of code from AFSecurityPolicy.m, the AFPublicKeyForCertificate function, in the AFNetworking framework):
policy = SecPolicyCreateBasicX509();
AF_Require_noErr(SecTrustCreateWithCertificates(tempCertificates, policy, &allowedTrust), _out);
AF_Require_noErr(SecTrustEvaluate(allowedTrust, &result), _out);
//result = 5 (kSecTrustResultRecoverableTrustFailure)
//different policy
policy = SecPolicyCreateSSL(true, (__bridge CFStringRef)@"www.MySite.com");
AF_Require_noErr(SecTrustCreateWithCertificates(tempCertificates, policy, &allowedTrust), _out);
AF_Require_noErr(SecTrustEvaluate(allowedTrust, &result), _out);
//result = 4 (kSecTrustResultUnspecified)
The certificate is valid and not expired. The signature algorithm is SHA-1. I don't get why it returns kSecTrustResultRecoverableTrustFailure and not kSecTrustResultUnspecified for SecPolicyCreateBasicX509.
Please read Apple's documentation for Certificate, Key, and Trust Services
The SecTrustEvaluate function validates a certificate by verifying its signature plus the signatures of the certificates in its certificate chain, up to the anchor certificate, according to the policy or policies included in the trust management object.
As a rule, you should handle the various return values as follows:
kSecTrustResultUnspecified—Evaluation successfully reached an (implicitly trusted) anchor certificate without any evaluation failures, but never encountered any explicitly stated user-trust preference. This is the most common return value.
kSecTrustResultRecoverableTrustFailure—This means that you should not trust the chain as-is, but that the chain could be trusted with some minor change to the evaluation context, such as ignoring expired certificates or adding an additional anchor to the set of trusted anchors.
I have read quite a few posts and sources now but couldn't find a definite answer.
I'm getting kSecTrustResultRecoverableTrustFailure on my SecTrustEvaluate() call and I would like to figure out why (i.e. I want to figure out where exactly the trust chain validation fails and why). On OS X there seems to be a related function called SecTrustGetResult, but it is now deprecated even on OS X.
How can I figure out where the validation fails? I'm fine with using private APIs, as I'm using this only during debugging to understand what exactly is going on inside.
Thanks.
Just use SecTrustCopyProperties() after calling SecTrustEvaluate():
SecTrustRef trust = ...;
SecTrustResultType trustResult = kSecTrustResultOtherError;
OSStatus status = SecTrustEvaluate(trust, &trustResult);
if (trustResult == kSecTrustResultRecoverableTrustFailure) {
    NSArray *trustProperties = (__bridge_transfer id)
        SecTrustCopyProperties(trust);
}
trustProperties is an array of dictionaries, one dictionary per cert in the evaluated cert chain. Every dictionary has an entry named title, containing the name of the cert; if the cert didn't evaluate, it also contains an entry named error containing the failure reason. E.g. if the problem was that the cert had expired, the value of error will be CSSMERR_TP_CERT_EXPIRED.
I recently came across a situation where I absolutely needed to use the method OpenSSL::PKey::RSA#params. However, the doc says the following:
THIS METHOD IS INSECURE, PRIVATE INFORMATION CAN LEAK OUT!!!
...
Don’t use :-)) (It’s up to you)
What does this mean? How is the private key normally protected within the instance of the RSA key and how is this different from any regular object?
Can I prevent information from leaking by doing something like this, where the method is only accessed within a lambda:
private_key = OpenSSL::PKey::RSA.generate(2048)

save_private = lambda do
  key = OpenSSL::Digest::SHA512.new.digest("password")
  aes = OpenSSL::Cipher.new("AES-256-CFB")
  iv  = OpenSSL::Random.random_bytes(aes.iv_len)
  aes.encrypt
  aes.key, aes.iv = key, iv
  aes.update(private_key.params.to_s) + aes.final
end

private_enc, save_private = save_private.call, nil
Also, if this security problem has anything to do with variables lingering in memory awaiting GC, can forcing garbage collection make things more secure?
GC.start
Thanks in advance to anybody who can clear this up.
It seems to give away information about the private key. The key components need to be available to perform any signing or decryption operation, so normally the key information is in memory in the clear. Obviously, if you retrieve it you must make sure that you keep it safe. I presume that this is where the warning comes in.
You can do all kinds of things like encrypting the private key parameters, but then you get to a point where you have to store the decryption key. Basically this will end up being impossible to solve without an external system (or a person keeping a password).
I have an RSA public key, some data and a signature of that data. I need to verify the signature. However, the signature is not of a digest of the data, but of the entire data. (The data itself is only 16 bytes, so the signer doesn't bother to hash the data before signing it.) I can verify the signature in C by specifying a NULL engine when initializing the context:
EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new(verify_key, NULL);
However, I have been unable to find an equivalent in Ruby's OpenSSL::PKey::PKey verify method. That method requires a Digest object, and there is no Digest that I can find that doesn't actually hash but just returns the data as-is. I tried creating my own Digest subclass, but I don't believe that can work, since the underlying OpenSSL library won't know about the existence of a custom digest type.
Am I stuck, or is there a way to solve this problem given that I cannot modify the code run by the signer?
Summarizing the answer from the comments in order to remove this question from the "Unanswered" filter...
owlstead:
Have you tried to find a function like public_decrypt? It may work, as normally you should not encrypt with a private key and decrypt with a public key. With a bit of luck it will accept the signature version of PKCS#1 padding (note that the padding used for encryption and signing is different in PKCS#1).
Wammer:
Of course - decrypting the signature with the public key and verifying that it matches the data works fine. So far this is working fine with the standard PKCS#1 padding, but I'll do some more research to see if the differing encryption and signing paddings are a problem in practice. Thanks.
owlstead:
After a decrypt and validation of the padding, all that is left is a (if possible, secure) compare. So that would replace the verification function pretty well. Most of the security is in the modular arithmetic and padding.