I'm trying to use S3's pre-signed URLs with an enforced Content-MD5, basically following the example from their docs, but obviously I'm doing something wrong.
Here is the checksum of the file I'm trying to upload:
➜ md5 testfile.txt
MD5 (testfile.txt) = ce0a4a83c88c2e7562968f03076ae62f
Here is the code:
package main

import (
    "encoding/base64"
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    sess, err := session.NewSession(&aws.Config{
        Region: aws.String("eu-central-1"),
    })
    svc := s3.New(sess)
    resp, _ := svc.PutObjectRequest(&s3.PutObjectInput{
        Bucket: aws.String("bucket"),
        Key:    aws.String("testfile.txt"),
    })
    md5 := "ce0a4a83c88c2e7562968f03076ae62f" // hard coded & pasted from "$ md5 testfile.txt"
    md5s := base64.StdEncoding.EncodeToString([]byte(md5))
    resp.HTTPRequest.Header.Set("Content-MD5", md5s)
    url, err := resp.Presign(15 * time.Minute)
    if err != nil {
        fmt.Println("error presigning request", err)
        return
    }
    fmt.Printf("curl -XPUT -H \"Content-MD5: %s\" %s --upload-file %s\n\n", md5s, url, "testfile.txt")
}
Which should give me a ready-to-use curl command like: curl -XPUT -H "Content-MD5: Y2UwYTRhODNjODhjMmU3NTYyOTY4ZjAzMDc2YWU2MmY=" https://bucket.s3.eu-central-1.amazonaws.com/testfile.txt<super-long-url> --upload-file testfile.txt
Unfortunately the request always fails with this message:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidDigest</Code><Message>The Content-MD5 you specified was invalid.</Message><Content-MD5>Y2UwYTRhODNjODhjMmU3NTYyOTY4ZjAzMDc2YWU2MmY=</Content-MD5><RequestId>24F73D8948824799</RequestId><HostId>uKgSjxi03P4EvBk+Yo/EzxqWT0AI6AN3FPB2bKKAtgVjp8t4q2Ku+Tvui108vIQgcwgfvQdwmrk=</HostId></Error>
As I was a bit unsure whether I should send the base64 of the MD5, I tried the plain hex MD5 as well, which responds with:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId><accesskeyid></AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
20180127T215418Z
20180127/eu-central-1/s3/aws4_request
e9580e510332d2fe8811209a8952e849022a56b93a02eca037fa43a10dec680f</StringToSign><SignatureProvided><signature></SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 31 38 30 31 32 37 54 32 31 35 34 31 38 5a 0a 32 30 31 38 30 [...]
</StringToSignBytes><CanonicalRequest>PUT
/testfile.txt
X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=<accesskeyid>%2F20180127%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20180127T215418Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bhost
content-md5:ce0a4a83c88c2e7562968f03076ae62f
host:bucket.s3.eu-central-1.amazonaws.com
content-md5;host
UNSIGNED-PAYLOAD</CanonicalRequest><CanonicalRequestBytes>50 55 54 0a 2f 74 65 73 74 66 69 6c 65 2e 74 78 74 0a 58 2d 41 6d 7a 2d 41 6c 67 6f 72 69 74 68 6d 3d 41 57 53 34 2d 48 4d 41 43 [...]
] 44</CanonicalRequestBytes><RequestId>D92C97EE37BE602A</RequestId><HostId>VatR9cidZlUgq+Ngd5vkZ+wHNiumsCPhx/TvZnwImAkj/STZ0eXazVrwGPRdketBbICd91VLG9E=</HostId></Error>
An upload works as soon as I remove the header setting resp.HTTPRequest.Header.Set("Content-MD5", md5s) and request with curl -XPUT https://bucket.s3.eu-central-1.amazonaws.com/testfile.txt<super-long-url> --upload-file testfile.txt.
What am I doing wrong?
Because of the way base64 encoding works, the base64 representation of an MD5 will always be exactly 24 characters long, and the last 2 characters will always be ==. As you can see, yours is about twice as long as it should be.
An actual MD5 digest/hash is only 16 bytes (128 bits) long, and it is a non-printable binary blob.
The md5sum utility and similar tools print the digest in a hex-encoded, printable format, which is 32 characters long and consists only of the characters 0-9 and a-f. It's the same value, but it has already been passed through hex encoding, so it isn't the representation you need to start with if you want to base64-encode the MD5 as required by the Content-MD5 header.
openssl dgst -md5 -binary {filename} will generate the binary representation of the md5 of the file, or you can use a pipe to actually generate the final base-64 representation with openssl dgst -md5 -binary {filename} | base64.
Note that this has nothing to do with SSL, of course; I used the openssl dgst tool for this example because it's probably something you already have on your system, as is the base64 conversion tool.
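In Go, that means hashing the file and base64-encoding the raw 16-byte digest rather than the pasted hex string. A minimal sketch of that part (the file name is an assumption):

package main

import (
    "crypto/md5"
    "encoding/base64"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    f, err := os.Open("testfile.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    h := md5.New()
    if _, err := io.Copy(h, f); err != nil {
        log.Fatal(err)
    }

    // Base64-encode the 16 raw digest bytes, not the 32-character hex string.
    md5s := base64.StdEncoding.EncodeToString(h.Sum(nil))
    fmt.Println(md5s) // always 24 characters, ending in "=="
}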
Related
I want to decode a Base64-encoded string into human-readable data, and I'm looking for the right encoding to do so.
This is the command I'm trying:
echo H4sICJVHi14AA2ZsYWcyLnR4dAAzsvLzdHb193O1Kkktyk3KzLNKLjMp4gIAtRX2oBcAAAA= | base64 -d
The above outputs some fuzzy, non-human-readable data:
�G�^flag2.txt3���tv��s�*I-�M�̳J.3)����
Why are many characters missing?
How can I read all the characters?
My GNOME terminal is set to UTF-8. Is there a better/wider encoding? How do I set that?
Your Base64-encoded data is binary, with a mix of printable and non-printable characters.
Let's see what it actually contains with hexdump:
<<<'H4sICJVHi14AA2ZsYWcyLnR4dAAzsvLzdHb193O1Kkktyk3KzLNKLjMp4gIAtRX2oBcAAAA=' base64 -d | hexdump -C
00000000 1f 8b 08 08 95 47 8b 5e 00 03 66 6c 61 67 32 2e |.....G.^..flag2.|
00000010 74 78 74 00 33 b2 f2 f3 74 76 f5 f7 73 b5 2a 49 |txt.3...tv..s.*I|
00000020 2d ca 4d ca cc b3 4a 2e 33 29 e2 02 00 b5 15 f6 |-.M...J.3)......|
00000030 a0 17 00 00 00 |.....|
00000035
You can extract valid text with the strings command:
<<<'H4sICJVHi14AA2ZsYWcyLnR4dAAzsvLzdHb193O1Kkktyk3KzLNKLjMp4gIAtRX2oBcAAAA=' base64 -d | strings
flag2.txt
J.3)
Or save it to a bin file:
<<<'H4sICJVHi14AA2ZsYWcyLnR4dAAzsvLzdHb193O1Kkktyk3KzLNKLjMp4gIAtRX2oBcAAAA=' >file.bin base64 -d
Let's check what it is:
file file.bin
file.bin: gzip compressed data, was "flag2.txt", last modified: Mon Apr 6 15:15:33 2020, from Unix, original size modulo 2^32 23
Since it is gzip'ed data, let's gunzip it:
<file.bin gunzip
2:NICEONE:termbin:cv4r
Or do it all in one line:
<<<'H4sICJVHi14AA2ZsYWcyLnR4dAAzsvLzdHb193O1Kkktyk3KzLNKLjMp4gIAtRX2oBcAAAA=' base64 -d | gunzip
2:NICEONE:termbin:cv4r
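The same decode-and-gunzip pipeline can also be expressed in Go; this is just a sketch using the standard library:

package main

import (
    "compress/gzip"
    "encoding/base64"
    "io"
    "log"
    "os"
    "strings"
)

func main() {
    const b64 = "H4sICJVHi14AA2ZsYWcyLnR4dAAzsvLzdHb193O1Kkktyk3KzLNKLjMp4gIAtRX2oBcAAAA="

    // Base64-decode the input, then decompress the resulting gzip stream.
    dec := base64.NewDecoder(base64.StdEncoding, strings.NewReader(b64))
    gz, err := gzip.NewReader(dec)
    if err != nil {
        log.Fatal(err)
    }
    defer gz.Close()

    // Prints: 2:NICEONE:termbin:cv4r
    if _, err := io.Copy(os.Stdout, gz); err != nil {
        log.Fatal(err)
    }
}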
It appears that Go doesn't support all Unicode characters in its rune literals:
package main

import "fmt"

func main() {
    standardSuits := []rune{'♠️', '♣️', '♥️', '♦️'}
    fmt.Println(standardSuits)
}
Generates the following error:
./main.go:6: missing '
./main.go:6: invalid identifier character U+FE0F '️'
./main.go:6: syntax error: unexpected ️, expecting comma or }
./main.go:6: missing '
./main.go:6: invalid identifier character U+FE0F '️'
./main.go:6: missing '
./main.go:6: invalid identifier character U+FE0F '️'
./main.go:6: missing '
./main.go:6: invalid identifier character U+FE0F '️'
./main.go:6: missing '
./main.go:6: too many errors
Is there a way to get around this, or should I just live with this limitation and use something else?
It looks to me like a parsing issue. You could use the Unicode code points to produce those runes, which should give the same result as using the characters.
package main

import "fmt"

func main() {
    standardSuits := []rune{'\u2660', '\u2663', '\u2665', '\u2666', '⌘'}
    fmt.Println(standardSuits)
}
Generates
[9824 9827 9829 9830 8984]
Playground link: https://play.golang.org/p/jTLsbs7DM1
I added the additional 5th rune to check whether the code point and the character give the same result. Looks like they do.
Edit:
Not sure what is wrong with your characters (I did not view them in a hex editor; I have none around), but something is strange about them.
I also got this to run by copy pasting the chars from Wikipedia:
package main

import "fmt"

func main() {
    standardSuits := []rune{'♠', '♣', '♥', '♦'}
    fmt.Println(standardSuits)
}
https://play.golang.org/p/CKR0u2_IIB
The Unicode string you use in your source code consists of more than one "character", but a character constant '...' is not allowed to contain a string of length greater than one. In more detail:
If I copy&paste your source code and print a hexdump, I can see the exact bytes in your source code:
>>> hexdump -C x.go
00000000 70 61 63 6b 61 67 65 20 6d 61 69 6e 0a 0a 69 6d |package main..im|
00000010 70 6f 72 74 20 22 66 6d 74 22 0a 0a 66 75 6e 63 |port "fmt"..func|
00000020 20 6d 61 69 6e 28 29 20 7b 0a 20 20 73 74 61 6e | main() {. stan|
00000030 64 61 72 64 53 75 69 74 73 20 3a 3d 20 5b 5d 72 |dardSuits := []r|
00000040 75 6e 65 7b 27 e2 99 a0 ef b8 8f 27 2c 20 27 e2 |une{'......', '.|
00000050 99 a3 ef b8 8f 27 2c 20 27 e2 99 a5 ef b8 8f 27 |.....', '......'|
00000060 2c 20 27 e2 99 a6 ef b8 8f 27 7d 0a 20 20 66 6d |, '......'}. fm|
00000070 74 2e 50 72 69 6e 74 6c 6e 28 73 74 61 6e 64 61 |t.Println(standa|
00000080 72 64 53 75 69 74 73 29 0a 7d 0a |rdSuits).}.|
This shows, for example, that your '♠️' is encoded using the hex bytes e2 99 a0 ef b8 8f. In UTF-8 encoding this corresponds to the two(!) code points \u2660 \uFE0F. This is not obvious from looking at the code, since \uFE0F (a variation selector) is not a printable character, but Go complains because you have more than one character in a character constant. Using '♠' or '\u2660' instead works as expected.
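A small Go sketch makes the two code points visible and shows the usual workaround of storing such emoji sequences as strings rather than runes:

package main

import "fmt"

func main() {
    // "♠️" is U+2660 followed by U+FE0F (a variation selector),
    // so it does not fit into a single rune literal.
    suit := "\u2660\uFE0F"
    for i, r := range suit {
        fmt.Printf("byte offset %d: %U\n", i, r) // prints U+2660, then U+FE0F
    }

    // Strings can hold multi-code-point sequences without trouble.
    standardSuits := []string{"♠️", "♣️", "♥️", "♦️"}
    fmt.Println(standardSuits)
}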
I'm extracting the modulus and exponent from a public SSH key with the goal of generating a PEM public key. Here is my code so far:
require "base64"
require "openssl"
def unpacked_byte_array(ssh_type, encoded_key)
  prefix = [7].pack("N") + ssh_type
  decoded = Base64.decode64(encoded_key)

  # Base64 decoding is too permissive, so we should validate if encoding is correct
  unless Base64.encode64(decoded).gsub("\n", "") == encoded_key && decoded.slice!(0, prefix.length) == prefix
    raise PublicKeyError, "validation error"
  end

  data = []
  until decoded.empty?
    front = decoded.slice!(0, 4)
    size = front.unpack("N").first
    segment = decoded.slice!(0, size)
    unless front.length == 4 && segment.length == size
      raise PublicKeyError, "byte array too short"
    end
    data << OpenSSL::BN.new(segment, 2)
  end

  return data
end
module OpenSSL
  module PKey
    class RSA
      def self.new_from_parameters(n, e)
        a = self.new # self.new(64) for ruby < 1.8.2
        a.n = n # converted to OpenSSL::BN automatically
        a.e = e
        a
      end
    end
  end
end
e, n = unpacked_byte_array('ssh-rsa', 'AAAAB3NzaC1yc2EAAAABIwAAAQEA3RC8whKGFx+b7BMTFtnIWl6t/qyvOvnuqIrMNI9J8+1sEYv8Y/pJRh0vAe2RaSKAgB2hyzXwSJ1Fh+ooraUAJ+q7P2gg2kQF1nCFeGVjtV9m4ZrV5kZARcQMhp0Bp67tPo2TCtnthPYZS/YQG6u/6Aco1XZjPvuKujAQMGSgqNskhKBO9zfhhkAMIcKVryjKYHDfqbDUCCSNzlwFLts3nJ0Hfno6Hz+XxuBIfKOGjHfbzFyUQ7smYnzF23jFs4XhvnjmIGQJcZT4kQAsRwQubyuyDuqmQXqa+2SuQfkKTaPOlVqyuEWJdG2weIF8g3YP12czsBgNppz3jsnhEgstnQ==')
rsa = OpenSSL::PKey::RSA.new_from_parameters(n, e)
puts rsa
The goal is to have a pure Ruby implementation of what ssh-keygen -f <file> -e -m pem does.
Now, comparing the results, they look very similar, but my code returns a few more bytes at the beginning of the key:
$ ssh-keygen -f ~/.ssh/id_rsa_perso.pub -e -m pem
-----BEGIN RSA PUBLIC KEY-----
MIIBCAKCAQEA3RC8whKGFx+b7BMTFtnIWl6t/qyvOvnuqIrMNI9J8+1sEYv8Y/pJ
Rh0vAe2RaSKAgB2hyzXwSJ1Fh+ooraUAJ+q7P2gg2kQF1nCFeGVjtV9m4ZrV5kZA
RcQMhp0Bp67tPo2TCtnthPYZS/YQG6u/6Aco1XZjPvuKujAQMGSgqNskhKBO9zfh
hkAMIcKVryjKYHDfqbDUCCSNzlwFLts3nJ0Hfno6Hz+XxuBIfKOGjHfbzFyUQ7sm
YnzF23jFs4XhvnjmIGQJcZT4kQAsRwQubyuyDuqmQXqa+2SuQfkKTaPOlVqyuEWJ
dG2weIF8g3YP12czsBgNppz3jsnhEgstnQIBIw==
-----END RSA PUBLIC KEY-----
$ ruby ssh2x509.rb
-----BEGIN PUBLIC KEY-----
MIIBIDANBgkqhkiG9w0BAQEFAAOCAQ0AMIIBCAKCAQEA3RC8whKGFx+b7BMTFtnI
Wl6t/qyvOvnuqIrMNI9J8+1sEYv8Y/pJRh0vAe2RaSKAgB2hyzXwSJ1Fh+ooraUA
J+q7P2gg2kQF1nCFeGVjtV9m4ZrV5kZARcQMhp0Bp67tPo2TCtnthPYZS/YQG6u/
6Aco1XZjPvuKujAQMGSgqNskhKBO9zfhhkAMIcKVryjKYHDfqbDUCCSNzlwFLts3
nJ0Hfno6Hz+XxuBIfKOGjHfbzFyUQ7smYnzF23jFs4XhvnjmIGQJcZT4kQAsRwQu
byuyDuqmQXqa+2SuQfkKTaPOlVqyuEWJdG2weIF8g3YP12czsBgNppz3jsnhEgst
nQIBIw==
-----END PUBLIC KEY-----
Notice my output has the content of the ssh-keygen output, but with MIIBIDANBgkqhkiG9w0BAQEFAAOCAQ0A prepended.
What could cause these extra bytes, and how could I get the proper result?
It seems the output format for RSA public keys in Ruby OpenSSL was changed in 1.9.3 from PKCS#1 (the format OpenSSH uses) to X.509 SubjectPublicKeyInfo:
https://redmine.ruby-lang.org/issues/4421
What is suggested in this bug report is to emulate the PKCS#1 encoding with:
ary = [OpenSSL::ASN1::Integer.new(n), OpenSSL::ASN1::Integer.new(e)]
pub_key = OpenSSL::ASN1::Sequence.new(ary)
base64 = Base64.encode64(pub_key.to_der)
#This is the equivalent to the PKCS#1 encoding used before 1.9.3
pem = "-----BEGIN RSA PUBLIC KEY-----\n#{base64}-----END RSA PUBLIC KEY-----"
The monkey patching of OpenSSL::PKey::RSA is thus not necessary.
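For comparison, Go's standard library has this PKCS#1 encoding built in; here is a sketch with a toy modulus (the real n and e would come from the SSH blob):

package main

import (
    "crypto/rsa"
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "math/big"
)

func main() {
    // Toy values for illustration only; a 2048-bit modulus would be used in practice.
    pub := &rsa.PublicKey{N: big.NewInt(3233), E: 35}

    // MarshalPKCS1PublicKey emits exactly the two-integer SEQUENCE,
    // without the X.509 SubjectPublicKeyInfo wrapper.
    der := x509.MarshalPKCS1PublicKey(pub)
    fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "RSA PUBLIC KEY", Bytes: der}))
}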
To solve this problem, you can analyze the ASN.1 structure.
For your output, it is:
SEQUENCE (2 elem)
  SEQUENCE (2 elem)
    OBJECT IDENTIFIER 1.2.840.113549.1.1.1
    NULL
  BIT STRING (1 elem)
    SEQUENCE (2 elem)
      INTEGER (2048 bit) 279069188856447290054297383130027286257044344789969750715307012565210…
      INTEGER 35
For the ssh-keygen output, it is:
SEQUENCE (2 elem)
  INTEGER (2048 bit) 279069188856447290054297383130027286257044344789969750715307012565210…
  INTEGER 35
What does this mean? It means your RSA key is structured differently. The ssh-keygen output is just the sequence of the 2048-bit modulus and the exponent, whereas your output additionally wraps that sequence in a structure carrying the algorithm identifier.
The solution? Remove those leading bytes, which you can calculate by analyzing the ASN.1 structure.
Or use a hexdump to work out how many bytes have to be removed from your RSA public key.
Your RSA public key:
30 82 01 20 30 0D 06 09 2A 86 48 86 F7 0D 01 01
01 05 00 03 82 01 0D 00 **30 82 01 08 02 82 01 01
00 DD 10 BC C2** 12 86 17 1F 9B EC 13 13 16 D9 C8
5A 5E AD FE AC AF 3A F9 EE A8 8A CC 34 8F 49 F3
ED 6C 11 8B FC 63 FA 49 46 1D 2F 01 ED 91 69 22
80 80 1D A1 CB 35 F0 48 9D 45 87 EA 28 AD A5 00
27 EA BB 3F 68 20 DA 44 05 D6 70 85 78 65 63 B5
… skipping 160 bytes …
0F D7 67 33 B0 18 0D A6 9C F7 8E C9 E1 12 0B 2D
9D 02 01 23
SSH RSA public key
**30 82 01 08 02 82 01 01 00 DD 10 BC C2** 12 86 17
1F 9B EC 13 13 16 D9 C8 5A 5E AD FE AC AF 3A F9
EE A8 8A CC 34 8F 49 F3 ED 6C 11 8B FC 63 FA 49
46 1D 2F 01 ED 91 69 22 80 80 1D A1 CB 35 F0 48
9D 45 87 EA 28 AD A5 00 27 EA BB 3F 68 20 DA 44
… skipping 160 bytes …
74 6D B0 78 81 7C 83 76 0F D7 67 33 B0 18 0D A6
9C F7 8E C9 E1 12 0B 2D 9D 02 01 23
By analyzing this, you can see that you have to remove these:
30 82 01 20 30 0D 06 09 2A 86 48 86 F7 0D 01 01
01 05 00 03 82 01 0D 00
That is 24 bytes; remove those 24 bytes from the start of your key.
Or you can use an ASN.1 parser and just extract the inner sequence.
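To illustrate the parser route, here is a sketch in Go that reads the X.509 ("BEGIN PUBLIC KEY") PEM on stdin, unwraps the SubjectPublicKeyInfo, and re-emits only the inner PKCS#1 sequence:

package main

import (
    "crypto/rsa"
    "crypto/x509"
    "encoding/pem"
    "io"
    "log"
    "os"
)

func main() {
    spkiPEM, err := io.ReadAll(os.Stdin)
    if err != nil {
        log.Fatal(err)
    }

    block, _ := pem.Decode(spkiPEM)
    if block == nil {
        log.Fatal("no PEM block found")
    }

    // Parse the outer SubjectPublicKeyInfo wrapper...
    pub, err := x509.ParsePKIXPublicKey(block.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    rsaPub, ok := pub.(*rsa.PublicKey)
    if !ok {
        log.Fatal("not an RSA key")
    }

    // ...and re-encode only the inner RSAPublicKey sequence (PKCS#1).
    der := x509.MarshalPKCS1PublicKey(rsaPub)
    pem.Encode(os.Stdout, &pem.Block{Type: "RSA PUBLIC KEY", Bytes: der})
}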
I realize this is a very similar post to others (e.g. this one), but there are details missing from those posts which might be significant in my case.
To start with, here's my simplified program:
#include "stdafx.h"
#include <windows.h>
#include <wincrypt.h>
int _tmain(int argc, _TCHAR* argv[])
{
    // usage: CertExtract certpath
    char keyFile[] = "C:\\Certificates\\public.crt";
    BYTE lp[65536];
    SECURITY_ATTRIBUTES sa;
    HANDLE hKeyFile;
    DWORD bytes;
    PCCERT_CONTEXT certContext;

    sa.nLength = sizeof(sa);
    sa.lpSecurityDescriptor = NULL;
    sa.bInheritHandle = FALSE;

    hKeyFile = CreateFile(keyFile, GENERIC_READ, FILE_SHARE_READ, &sa, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hKeyFile) {
        if (ReadFile(hKeyFile, lp, GetFileSize(hKeyFile, NULL), &bytes, NULL) && bytes > 0) {
            certContext = CertCreateCertificateContext(X509_ASN_ENCODING, lp, bytes);
            if (certContext) {
                printf("yay!");
                CertFreeCertificateContext(certContext);
            }
            else {
                printf("Could not convert certificate to internal form\n");
            }
        }
        else {
            printf("Failed to read key file: %s\n", keyFile);
        }
    }
    else {
        printf("Failed to open key file: %s\n", keyFile);
    }
    CloseHandle(hKeyFile);
    return 0;
}
In order to create the certificate, I used the following steps with OpenSSL:
C:\Certificates>openssl genrsa -out private.key 1024
Loading 'screen' into random state - done
Generating RSA private key, 1024 bit long modulus
......................................++++++
................++++++
e is 65537 (0x10001)
C:\Certificates>openssl req -new -key private.key -out public.csr
Loading 'screen' into random state - done
C:\Certificates>copy private.key private.key.org
1 file(s) copied.
C:\Certificates>openssl rsa -in private.key.org -out private.key
writing RSA key
C:\Certificates>openssl x509 -req -days 365 -in public.csr -signkey private.key -out public.crt
Loading 'screen' into random state - done
Signature ok
subject=/CN=My Signing Cert
Getting Private key
with the following conf file:
RANDFILE = .rnd
[ req ]
distinguished_name = req_distinguished_name
prompt = no
[ req_distinguished_name ]
commonName = My Signing Cert
The certificate file looks like:
-----BEGIN CERTIFICATE-----
MIIBqzCCARQCCQDUJyWk0OxlRTANBgkqhkiG9w0BAQUFADAaMRgwFgYDVQQDDA9N
eSBTaWduaW5nIENlcnQwHhcNMTYwMTA1MjIzODU5WhcNMTcwMTA0MjIzODU5WjAa
MRgwFgYDVQQDDA9NeSBTaWduaW5nIENlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0A
MIGJAoGBAJobIhfSSMLEPeG9SOBelWHo4hjKXe8dT6cllPr6QXdXe2VNLh9fxVlx
spVGFQwjlF3OHYnmSQnY3m2b5wlFNYVuHvy8rUsZWOF4drSbiqWKh0TuJ+4MBeGq
EormTJ+kiGqNm5IVRrTu9OV8f0XQTGV1pxHircQxsGhxY5w0QTjjAgMBAAEwDQYJ
KoZIhvcNAQEFBQADgYEAedqjKfMyIFC8nUbJ6t/Y8D+fJFwCcdwojUFizr78FEwA
IZSas1b1bXSkA+QEooW7pYdBAfzNuD3WfZAIZpqFlr4rPNIqHzYa0OIdDPwzQQLa
3zPKqjj6QeTWEi5/ArzO+sTVv4m3Og3GQjMChb8H/GxsWdbComPVP82DTUet+ZU=
-----END CERTIFICATE-----
Converting the PEM-encoding to hex allows me to identify the parts of the certificate:
30 SEQUENCE //Certificate
  (82 01 AB)
  30 SEQUENCE //tbsCertificate
    (82 01 14)
    02 INTEGER //serialNumber
      (09)
      00 D4 27 25 A4 D0 EC 65 45
    30 SEQUENCE //signature
      (0D)
      06 OBJECT IDENTIFIER
        (09)
        2A 86 48 86 F7 0D 01 01 05
      05 NULL
        (00)
    30 SEQUENCE //issuer
      (1A)
      31 SET
        (18)
        30 SEQUENCE
          (16)
          06 OBJECT IDENTIFIER
            (03)
            55 04 03
          0C UTF8String
            (0F)
            4D 79 20 53 69 67 6E 69 6E 67 20 43 65 72 74
    30 SEQUENCE //validity
      (1E)
      17 UTCTime
        (0D)
        31 36 30 31 30 35 32 32 33 38 35 39 5A
      17 UTCTime
        (0D)
        31 37 30 31 30 34 32 32 33 38 35 39 5A
    30 SEQUENCE //subjectName
      (1A)
      31 SET
        (18)
        30 SEQUENCE
          (16)
          06 OBJECT IDENTIFIER
            (03)
            55 04 03
          0C UTF8String
            (0F)
            4D 79 20 53 69 67 6E 69 6E 67 20 43 65 72 74
    30 SEQUENCE //subjectPublicKeyInfo
      (81 9F)
      30 SEQUENCE //algorithmId
        (0D)
        06 OBJECT IDENTIFIER //algorithm
          (09)
          2A 86 48 86 F7 0D 01 01 01
        05 NULL
          (00)
      03 BIT STRING //subjectPublicKey
        (81 8D)
        [00] //padding bits
        30 SEQUENCE //RSAPublicKey
          (81 89)
          02 INTEGER //modulus
            (81 81)
            00 9A 1B 22 17 D2 48 C2 C4 3D E1 BD 48 E0 5E 95 61 E8 E2 18 CA 5D EF 1D 4F A7 25 94 FA FA 41 77 57 7B 65 4D 2E 1F 5F C5 59 71 B2 95 46 15 0C 23 94 5D CE 1D 89 E6 49 09 D8 DE 6D 9B E7 09 45 35 85 6E 1E FC BC AD 4B 19 58 E1 78 76 B4 9B 8A A5 8A 87 44 EE 27 EE 0C 05 E1 AA 12 8A E6 4C 9F A4 88 6A 8D 9B 92 15 46 B4 EE F4 E5 7C 7F 45 D0 4C 65 75 A7 11 E2 AD C4 31 B0 68 71 63 9C 34 41 38 E3 02 03 01 00 01
  30 SEQUENCE //signatureAlgorithm
    (0D)
    06 OBJECT IDENTIFIER
      (09)
      2A 86 48 86 F7 0D 01 01 05
    05 NULL
      (00)
  03 BIT STRING //signatureValue
    (81 81)
    [00] //padding bits
    79 DA A3 29 F3 32 20 50 BC 9D 46 C9 EA DF D8 F0 3F 9F 24 5C 02 71 DC 28 8D 41 62 CE BE FC 14 4C 00 21 94 9A B3 56 F5 6D 74 A4 03 E4 04 A2 85 BB A5 87 41 01 FC CD B8 3D D6 7D 90 08 66 9A 85 96 BE 2B 3C D2 2A 1F 36 1A D0 E2 1D 0C FC 33 41 02 DA DF 33 CA AA 38 FA 41 E4 D6 12 2E 7F 02 BC CE FA C4 D5 BF 89 B7 3A 0D C6 42 33 02 85 BF 07 FC 6C 6C 59 D6 C2 A2 63 D5 3F CD 83 4D 47 AD F9 95
which appears to conform to the X.509 specs (as I would expect it to):
Certificate ::= {
tbsCertificate TBSCertificate,
signatureAlgorithm AlgorithmIdentifier,
signatureValue BIT STRING
}
TBSCertificate ::= SEQUENCE {
version [0] Version DEFAULT v1, <-- what does this mean?
serialNumber INTEGER,
signature AlgorithmIdentifier,
issuer Name,
validity Validity,
subjectName Name,
subjectPublicKeyInfo SubjectPublicKeyInfo
...
}
with the lone exception of the version part; it isn't clear to me whether it is optional or not (though it never seems to be present in certificates I create with OpenSSL).
I can open the certificate and import it into a certificate store successfully, so I don't think anything is specifically wrong with the file or its encoding.
When I reach the call to CertCreateCertificateContext, my lp buffer looks like:
-----BEGIN CERTIFICATE-----\nMIIBqzCCARQCCQDUJyWk0OxlRTANBgkqhkiG9w0BAQUFADAaMRgwFgYDVQQDDA9N\neSBTaWduaW5nIENlcnQwHhcNMTYwMTA1MjIzODU5WhcNMTcwMTA0MjIzODU5WjAa\nMRgwFgYDVQQDDA9NeSBTaWduaW5nIENlcnQwgZ8wDQ...
and bytes = 639 -- which is the file size.
I've tried adding logic to strip out the certificate comments, but examples of importing a certificate in this manner don't indicate that should be necessary.
I've tried setting the dwCertEncodingType to X509_ASN_ENCODING | PKCS_7_ASN_ENCODING and PKCS_7_ASN_ENCODING out of desperation (though I don't believe I am using PKCS#7 encoding here...a little fuzzy on that).
Does anyone have any suggestions on what I might be doing incorrectly here? I appreciate it.
I figured out my issue: CertCreateCertificateContext expects the binary ASN.1 (DER) data, not the PEM-encoded certificate I created with openssl. I figured this out by using a Microsoft certificate generation tool and testing that certificate out:
C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin>makecert.exe -n "CN=Test Signing Cert" -b 01/06/2016 -e 01/06/2017 -len 1024 -r C:\Certificates\public_v2.crt
Succeeded
Looking at the file in a hex editor, it looked precisely like the ASN.1 binary data. Next, I used the Copy to File feature of the certificate viewer that launches when you double-click a certificate to copy my original public.crt file to a DER-encoded binary X.509 (.CER) file, and verified that my program began to work (that is, CertCreateCertificateContext was now happy).
So, in case someone else is bumping up against the same issue I was having, here is a complete solution for importing a PEM-encoded certificate from a file into memory for use with the Crypto API:
#include "stdafx.h"
#include <windows.h>
#include <wincrypt.h>
#define LF 0x0A
int _tmain(int argc, _TCHAR* argv[])
{
char keyFile[] = "C:\\Certificates\\public.crt";
BYTE lp[65536];
SECURITY_ATTRIBUTES sa;
HANDLE hKeyFile;
DWORD bytes;
PCCERT_CONTEXT certContext;
BYTE *p;
DWORD flags;
sa.nLength = sizeof(sa);
sa.lpSecurityDescriptor = NULL;
sa.bInheritHandle = FALSE;
hKeyFile = CreateFile(keyFile, GENERIC_READ, FILE_SHARE_READ, &sa, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if (hKeyFile) {
if (ReadFile(hKeyFile, lp, GetFileSize(hKeyFile, NULL), &bytes, NULL) && bytes > 0) {
p = lp + bytes;
if (CryptStringToBinary((char *)lp, p - lp, CRYPT_STRING_BASE64_ANY, p, &bytes, NULL, &flags) && bytes > 0) {
certContext = CertCreateCertificateContext(X509_ASN_ENCODING, p, bytes);
if (certContext) {
printf("yay!");
CertFreeCertificateContext(certContext);
}
else {
printf("Could not convert certificate to internal form\n");
}
}
else {
printf("Failed to convert from PEM");
}
}
else {
printf("Failed to read key file: %s\n", keyFile);
}
}
else {
printf("Failed to open key file: %s\n", keyFile);
}
CloseHandle(hKeyFile);
return 0;
}
Note:
Because I'm lazy, I decode the PEM to binary in the same BYTE array I used to load the file. For this simple test that was expedient, but if you're implementing this sort of thing for real, I wouldn't recommend it.
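For a quick cross-check, the PEM-to-DER step that CryptStringToBinary performs here is the same thing Go's encoding/pem package does; a sketch (the file name is an assumption):

package main

import (
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    // pem.Decode strips the BEGIN/END lines and the Base64,
    // leaving the raw DER bytes that a certificate parser expects.
    data, err := os.ReadFile("public.crt")
    if err != nil {
        log.Fatal(err)
    }
    block, _ := pem.Decode(data)
    if block == nil || block.Type != "CERTIFICATE" {
        log.Fatal("no CERTIFICATE block found")
    }
    fmt.Printf("decoded %d DER bytes\n", len(block.Bytes))
}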
I am using OpenSSL in Ruby 1.8.7 and OpenSSL in Bash to decrypt a file, but with the Ruby code, the first 16 bytes of the decrypted file are wrong.
This is the result I get with Ruby
cf e8 cf d1 12 e2 75 48 59 56 30 30 7d 7d 30 1b | wrong bytes
00 00 00 08 00 0c 01 1a 00 05 00 00 00 01 00 00 | good bytes
01 46 01 1b 00 05 00 00 00 01 00 00 01 4e 01 28 | good bytes
********************good bytes****************** | good bytes
and this is the result I get with OpenSSL in Bash
ff d8 ff e1 22 d2 45 78 69 66 00 00 4d 4d 00 2a | correct bytes
00 00 00 08 00 0c 01 1a 00 05 00 00 00 01 00 00 | same bytes as in Ruby
01 46 01 1b 00 05 00 00 00 01 00 00 01 4e 01 28 | same bytes as in Ruby
*******************a lot of bytes*************** | same bytes as in Ruby
Ruby code:
require 'openssl'
c = OpenSSL::Cipher::Cipher.new("aes-128-cbc")
c.decrypt
c.key = "\177\373\2002\337\363:\357\232\251.\232\311b9\323"
c.iv = "00000000000000000000000000000001"
data = File.read("/tmp/file_crypt")
d = c.update(data)
d << c.final
file = File.open("/tmp/file_decrypt_ruby", "w")
file.write(d)
file.close
Bash OpenSSL command:
openssl aes-128-cbc -d -in /tmp/file_crypt -out /tmp/file_decrypt_bash -nosalt -iv 00000000000000000000000000000001 -K 7ffb8032dff33aef9aa92e9ac96239d3
The encoded file can be downloaded here: http://pastebin.com/EqHfpxjZ. Use "pbget" (if you have it) to download the file. Otherwise, copy the text, base 64 decode it, and lzma decompress it. (ex. wget -q -O- "$url" | base64 -d | lzma -d > "$TEMP").
Once you have the file through pbget, or the commands above, you'll need to do one final base 64 decoding:
base64 -d file_encode_base64 > encrypted_file
To make sure you have the correct encrypted file, the MD5 hash is: 30b8f5e7d700c108cd9815c00ca1de2d.
If you use the Bash version of OpenSSL to decrypt this file, you obtain a picture in JPG format.
But if you use the Ruby version, you obtain a data file that differs from picture.jpg in its first 16 bytes.
For reference, this is the command I used to encrypt the file in the first place:
openssl aes-128-cbc -e -in picture.jpg -out enc_file -nosalt -iv 00000000000000000000000000000001 -K 7ffb8032dff33aef9aa92e9ac96239d3
Can anyone explain why I can decode it with OpenSSL in Bash, but get a slightly different result when I use Ruby?
Finally, it works! And the answer is actually simple: your IV needs to be converted to binary in the Ruby code, similar to how the key is converted. I found the conversion code and explanation in a comment on this page.
Try this code:
require 'openssl'
cipher = OpenSSL::Cipher::Cipher.new("aes-128-cbc")
cipher.decrypt
cipher.key = "7ffb8032dff33aef9aa92e9ac96239d3".unpack('a2'*16).map{|x| x.hex}.pack('c'*16)
cipher.iv = "00000000000000000000000000000001".unpack('a2'*16).map{|x| x.hex}.pack('c'*16)
data = File.read("/tmp/file_crypt")
decrypted = cipher.update(data) + cipher.final
file = File.open("/tmp/file_decrypt_ruby", "w")
file.write(decrypted)
file.close
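As a cross-check in another language, here is a sketch of the same decryption in Go; the key and IV are hex-decoded to their 16 raw bytes, and the file names are taken from the question:

package main

import (
    "crypto/aes"
    "crypto/cipher"
    "encoding/hex"
    "log"
    "os"
)

func main() {
    // Decode the hex key and IV to 16 raw bytes each, as in the Ruby fix above.
    key, _ := hex.DecodeString("7ffb8032dff33aef9aa92e9ac96239d3")
    iv, _ := hex.DecodeString("00000000000000000000000000000001")

    ct, err := os.ReadFile("/tmp/file_crypt")
    if err != nil {
        log.Fatal(err)
    }
    if len(ct)%aes.BlockSize != 0 {
        log.Fatal("ciphertext is not a multiple of the block size")
    }

    block, err := aes.NewCipher(key)
    if err != nil {
        log.Fatal(err)
    }
    cipher.NewCBCDecrypter(block, iv).CryptBlocks(ct, ct) // decrypt in place

    // Strip the PKCS#7 padding that openssl added during encryption.
    pad := int(ct[len(ct)-1])
    if pad < 1 || pad > aes.BlockSize {
        log.Fatal("bad padding")
    }
    if err := os.WriteFile("/tmp/file_decrypt_go", ct[:len(ct)-pad], 0644); err != nil {
        log.Fatal(err)
    }
}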