I ran openssl speed rsa512 and it shows how many signs and verifies it can do per second. Unfortunately, the test does not say anything about the size of the message that is signed. So I dug into the OpenSSL sources and found the following line in speed.c:
ret = RSA_sign(NID_md5_sha1, buf, 36, buf2, rsa_num, rsa_key[testnum]);
Looking at the function in rsa.h, I can see the following declaration:
int RSA_sign(int type, const unsigned char *m, unsigned int m_length,
unsigned char *sigret, unsigned int *siglen, RSA *rsa);
I guess m is the message and m_length is its length.
Am I right that the message size is 36 bytes in the RSA speed test?
The same goes for ECDSA, e.g. openssl speed ecdsap256. speed.c uses the following line:
ret = ECDSA_sign(0, buf, 20, ecdsasig, ecdsasiglen, ecdsa[testnum]);
Am I right that the message size is 20 bytes in the ECDSA speed test?
My conclusion: it's not possible to compare them, since they sign messages of different lengths.
Asymmetric signatures, technically, don't sign messages. They sign hashes of messages.
Their rsa512 test does the RSA signature padding and transformation on an SSL "MD5 and SHA1" value (which is 16 + 20 = 36 bytes). So the number it produces is how many RSA pad-and-sign (and answer-copy) operations it can do per second; to estimate the throughput for signing real messages you still have to account for the time it takes to hash each message.
Their ecdsap256 computation assumes that the digest was SHA-1 (20 bytes). Again, you would combine this number with the time it takes to hash the message.
Since both figures leave out the hashing of the data, whose cost is the same in either case, they are comparable.
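As a worked illustration (the numbers here are made up, not from openssl speed): if rsa512 reported 20,000 signs per second (0.05 ms per pad-and-sign), ecdsap256 reported 30,000 (about 0.033 ms), and hashing your actual message took 1 ms, the end-to-end rates would be roughly 1/(1 ms + 0.05 ms) ≈ 952 and 1/(1 ms + 0.033 ms) ≈ 968 signed messages per second. The hashing term is identical for both, which is exactly why the two reported figures remain comparable.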
I came across the following code while using Go readers to limit the number of bytes read from a remote client when a file is sent through a multipart upload (e.g. from Postman).
r.Body = http.MaxBytesReader(w, r.Body, 32<<20+1024)
If I am not mistaken, the above notation should represent 33555456 bytes, or 33.555456 MB (32 * 2 ^ 20) + 1024. Or is this number not correct?
What I don't understand is:
why did the author use it like this? Why use 20 and not some other number?
why did the author add "+1024" at all? Why didn't he write 33 MB instead?
would it be OK to write 33555456 directly as int64?
If I am not mistaken, the above notation should represent 33555456 bytes, or 33.555456 MB (32 * 2 ^ 20) + 1024. Or is this number not correct?
Correct. You can trivially check it yourself.
fmt.Println(32<<20+1024)
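(In Go, << binds more tightly than +, so this parses as (32 << 20) + 1024 = 33554432 + 1024 = 33555456 bytes, i.e. exactly 32 MiB plus 1 KiB.)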
Why didn't he write 33 MB instead?
Because this number is not 33 MB. 33 * 1024 * 1024 = 34603008
would it be OK to write 33555456 directly as int64?
Naturally. That's most likely what it gets reduced to during compilation anyway. This notation is likely easier to read, once you figure out the logic behind 32, 20 and 1024.
Ease of reading is why I almost always (when not using ruby) write constants like "50 MB" as 50 * 1024 * 1024 and "30 days" as 30 * 86400, etc.
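As an aside, one way to keep that readability while making the intent explicit is to give the value a name (the constant name below is mine, not from the original code):

// Hypothetical name for the same 33555456-byte limit.
const maxUploadBytes = 32<<20 + 1024

r.Body = http.MaxBytesReader(w, r.Body, maxUploadBytes)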
I'm trying to do an MPI sum of 2-byte integers:
INTEGER, PARAMETER :: SIK2 = SELECTED_INT_KIND(2)
INTEGER(SIK2) :: s_save(dim)
Indeed, it's an array that takes integer values from 1 to 48 at most, so 2 bytes is enough; I'm doing this for memory reasons.
Therefore I tried the following:
CALL MPI_TYPE_CREATE_F90_INTEGER(SIK2, int2type, ierr)
CALL MPI_ALLreduce(MPI_IN_PLACE, s_save, nkpt_in, int2type, MPI_SUM, world_comm, ierr)
This works well with gfortran + Open MPI.
However, in the case of Intel I get a crash:
MPI_Allreduce(1000)......: MPI_Allreduce(sbuf=MPI_IN_PLACE, rbuf=0x55d2160, count=987, dtype=USER<f90_integer>, MPI_SUM, MPI_COMM_WORLD) failed
MPIR_SUM_check_dtype(106): MPI_Op MPI_SUM operation not defined for this datatype
Is there a proper (or recommended) way to do this so that it works for most compilers?
I'm having trouble getting my streaming over OTG USB FS configured as a VCP to work. I have a Nucleo-H743ZI board that seems to be doing a good job of sending me data, but on the PC side I have problems receiving that data.
for(;;) {
#define number_of_ccr 1024
    unsigned int lpBuffer[number_of_ccr] = {0};
    unsigned long nNumberOfBytesToRead = number_of_ccr*4;
    unsigned long lpNumberOfBytesRead;
    QueryPerformanceCounter(&startCounter);
    ReadFile(
        hSerial,
        lpBuffer,
        nNumberOfBytesToRead,
        &lpNumberOfBytesRead,
        NULL
    );
    if(!strcmp((char *)lpBuffer, "end\r\n")) {
        CloseHandle(FileHandle);
        fprintf(stderr, "end flag was received\n");
        break;
    }
    else if(lpNumberOfBytesRead > 0) {
        // NOTE(): succeed
        QueryPerformanceCounter(&endCounter);
        time = Win32GetSecondsElapsed(startCounter, endCounter);
        char *copyString = "copy";
        WriteFile(hSerial, copyString, strlen(copyString), &bytes_written, NULL);
        DWORD BytesWritten;
        // write data to file
        WriteFile(FileHandle, lpBuffer, nNumberOfBytesToRead, &BytesWritten, 0);
    }
}
QPC shows that the elapsed time was 0.00733297970 s; that's the time for one successful data-block transfer (1024*4 bytes).
This is the listener code, and I bet this is not how it should be done, so I'm here to ask for advice. I was hoping that full streaming without control sequences ("copy") would be possible, but in that case I can't receive adjacent data (within one transfer block it's okay, but two consecutively received blocks aren't adjacent).
Example:
block_1: 1 2 3 4 5 6
block_2: 13 14 15 16 17 18
Is there any way to speed up my receiving?
(I tried the O2 optimization switch without any success.)
You need to configure a buffer on the PC side that is 2 or 3 times the size of the buffer you transfer from your board, and use something like a double-buffer scheme for transferring the data: you transfer the first buffer while filling the second, then alternate.
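As an illustration only, here is a rough sketch of that alternating-buffer idea on the PC side using Win32 overlapped I/O (my choice of mechanism, not something from your code; it assumes hSerial was opened with FILE_FLAG_OVERLAPPED, that the board streams continuously rather than waiting for a "copy" reply, and it leaves out all error handling). One read is always in flight filling one buffer while the previously filled buffer is written to the file:

#define BLOCK_SIZE (1024 * 4)

unsigned char buffers[2][BLOCK_SIZE];
int filling = 0;                                   // index of the buffer currently being filled

OVERLAPPED ov = {0};
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);  // manual-reset event for GetOverlappedResult

DWORD ready = 0;
ReadFile(hSerial, buffers[filling], BLOCK_SIZE, NULL, &ov);      // start the first read

for (;;) {                                         // break out on your own end-of-stream condition
    // Wait for the read that is currently in flight to complete.
    GetOverlappedResult(hSerial, &ov, &ready, TRUE);

    int full = filling;                            // this buffer now holds received data
    filling ^= 1;                                  // the other buffer will be filled next

    ReadFile(hSerial, buffers[filling], BLOCK_SIZE, NULL, &ov);  // next read is now in flight

    // While the next block is arriving, write the completed block to disk.
    DWORD written = 0;
    WriteFile(FileHandle, buffers[full], ready, &written, NULL);
}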
A good thing to do is to activate the caches and place the buffers in the fast memory of the STM32H7 (its D1-domain RAM).
But if your interface does not match the speed you need, no trick will fix that. Except maybe one: if your controller is fast enough, you can implement lossless compression of your data and transfer the compressed stream. If you transmit low-entropy data, this can give you a solid boost in speed.
I need to convert a set of bits represented as a String. My String's length is a multiple of 8, so I can split it into sub-strings of 8 bits each. Then I have to convert these sub-strings to bytes and print them in hex. For example:
String seq = "0100000010000110";
The seq is much longer, but that is not the point. Below you can see two sub-strings from seq, and with one of them I have trouble. Why?
String s_ok = "01000000"; //this value is OK to convert
String s_error = "10000110"; //this is not OK to convert but in HEX it is 86 in DEC 134
byte nByte = Byte.parseByte(s_ok, 2);
System.out.println(nByte);
try {
byte bByte = Byte.parseByte(s_error, 2);
System.out.println(bByte);
} catch (Exception e) {
System.out.println(e); //Value out of range. Value:"10000110" Radix:2
}
int in=Integer.parseInt(s_error, 2);
System.out.println("s_error set of bits in DEC - "+in + " and now in HEX - "+Integer.toHexString((byte)in)); //s_error set of bits in DEC - 134 and now in HEX - ffffff86
I can't understand why there is an error; for a calculator it is no problem to convert 10000110. So I tried Integer, and I get ffffff86 instead of simply 86.
Please help me understand why this happens and how to avoid the issue.
Well, I found how to avoid ffffff:
System.out.println("s_error set of bits in DEC - "+in + " and now in HEX - "+Integer.toHexString((byte)in & 0xFF));
0xFF was added. The bad thing is that I still don't know where those ffffff came from, and it is not clear to me what I've actually done. Is it some kind of byte multiplication, or is it masking? I'm lost.
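For what it's worth, here is a small self-contained sketch (class and variable names are mine) of what is going on: Java's byte is signed and only holds -128..127, so Byte.parseByte rejects 134; the cast keeps the low 8 bits, which then read as -122; and when that byte is widened back to an int for Integer.toHexString, the sign bit is copied into the upper 24 bits (sign extension), which is where the ffffff comes from. & 0xFF masks those bits off again, so it is masking, not multiplication.

public class SignExtensionDemo {
    public static void main(String[] args) {
        int in = Integer.parseInt("10000110", 2);  // 134: too big for Java's signed byte (max 127)
        byte b = (byte) in;                        // same low 8 bits, now read as -122

        // Widening byte -> int copies the sign bit into the upper 24 bits (sign extension):
        System.out.println(Integer.toHexString(b));        // ffffff86
        // & 0xFF keeps only the low 8 bits, i.e. treats the byte as unsigned:
        System.out.println(Integer.toHexString(b & 0xFF)); // 86
        System.out.println(b & 0xFF);                      // 134
    }
}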
I have an X.509 certificate in Base64-encoded binary format. How can I retrieve information about the certificate with Oracle? I need to get the serial number of this certificate. Any ideas?
There is a solution on the Oracle forum: SQL to extract specific attributes from an x509 digital certificate
The code (the original is for certificates stored as a CLOB; I modified it for a BLOB and to return the serial number):
create or replace and compile java source named testx509src
as
import java.security.cert.*;
import java.io.*;
import java.sql.*;
import oracle.sql.BLOB;
import oracle.sql.NUMBER;
public class TestX509 {
    public static NUMBER getSerialNumber(BLOB cert)
            throws SQLException, IOException, CertificateException {
        Connection conn = (Connection) DriverManager.getConnection("jdbc:default:connection:");
        BufferedInputStream is = new BufferedInputStream(cert.getBinaryStream());
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate c = (X509Certificate) cf.generateCertificate(is);
        is.close();
        return new oracle.sql.NUMBER(c.getSerialNumber());
    }
}
/
CREATE OR REPLACE FUNCTION CERT_getSerialNumber(cert in blob)
RETURN NUMBER
AS LANGUAGE JAVA
NAME 'TestX509.getSerialNumber(oracle.sql.BLOB) return oracle.sql.NUMBER';
/
SQL> select CERT_GetSerialNumber(cert) serial from cert_storage where id = 1;
serial
-----------------------
243435653237
After you Base64-decode a certificate, you most likely get a DER-encoded ASN.1 structure of an X.509 v3 certificate (enough keywords to continue searching for an answer).
I am not aware of any PL/SQL implementation of an ASN.1 parser for DER-encoded content, but it is possible to learn the ASN.1 structures (SEQUENCE, INTEGER, etc.) and their binary representation in DER, and then do the parsing in PL/SQL byte by byte.
=> The serial number is close to the beginning of the DER content, so you do not need to support parsing every ASN.1 element just to extract the serial number.
You would then have to look at the X.509 certificate structure/template, which explains how a certificate is built from basic ASN.1 elements, and parse/extract those elements to get the info you're interested in.
More detailed description of what's in a certificate: an X.509 certificate consists of data fields such as version, serial number, valid from/to dates, issuer DN (distinguished name), subject DN, subject public key, signature hash algorithm, etc. This info is then "signed" by the certificate issuer: the issuer creates a hash code (e.g. using the SHA-1 algorithm) from the info mentioned above and then encrypts it using the issuer's private key (RSA encryption). Having the issuer's public key and trusting the issuer, one can decrypt the hash code the issuer created, compute a hash code from the certificate details using the same algorithm, and finally compare the two. If they match, no one has modified the details, so if the issuer is trusted, the details found in the certificate can be trusted as well.
X.509 certificate begins with (data types shown to the right):
Certificate SEQUENCE
Data SEQUENCE
Version [0] { INTEGER }
Serial Number INTEGER
Each element starts with a tag byte indicating the element type, followed by the element length, followed by the element content. If the content is fewer than 128 bytes, the length field needs only a single byte. If it is longer than 127 bytes, bit 7 of the first length byte is set to 1 and bits 6 through 0 give the number of additional bytes that encode the content length. In the case of an X.509 certificate, Version is wrapped in the context-specific tag [0].
Books explaining ASN.1 can be downloaded for free from the web.
Here's an example for analysing the beginning of a certificate:
30 82 02 D7 30 82 02 40 A0 03 02 01 02 02 01 01 ...
Interpretation:
30 = Start of Certificate SEQUENCE
82 = the sequence length is encoded in the following two bytes
02 D7 = sequence length 0x02D7 (Big Endian order of bytes)
30 = Start of Data SEQUENCE
82 = the sequence length is encoded in the following two bytes
02 40 = sequence length 0x0240 (Big Endian order of bytes)
A0 = start of context-specific element [0]
03 = length of context-specific element [0]
02 01 02 = content of context-specific element [0] (Version INTEGER)
(02=start of Version INTEGER,
01=length of the integer,
02=Version value (zero-based, so value 02 actually means v3))
02 = Start of Serial Number INTEGER
01 = Length of Serial Number INTEGER
01 = The serial number itself
...
Of course, in your case the serial number may be longer than the single byte shown here.
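To tie the walkthrough above together, here is a minimal Java sketch (the class and method names are mine, and it is deliberately not a general ASN.1 parser): it walks just far enough into a DER-encoded certificate to pull out the serial number, following exactly the tag/length/content steps shown above. The same byte-by-byte logic could be transcribed into PL/SQL.

import java.math.BigInteger;

public class DerSerial {
    // Assumes a well-formed, DER-encoded X.509 certificate.
    public static BigInteger serialNumber(byte[] der) {
        int[] p = {0};
        enter(der, p, 0x30);                 // Certificate SEQUENCE: descend into it
        enter(der, p, 0x30);                 // Data (TBSCertificate) SEQUENCE: descend into it
        if ((der[p[0]] & 0xFF) == 0xA0)      // optional context-specific [0] Version element
            skip(der, p);
        if ((der[p[0]] & 0xFF) != 0x02)      // next element must be the Serial Number INTEGER
            throw new IllegalArgumentException("serial number INTEGER not found");
        p[0]++;
        int len = readLength(der, p);
        byte[] value = new byte[len];
        System.arraycopy(der, p[0], value, 0, len);
        return new BigInteger(value);        // INTEGER content is big-endian two's complement
    }

    // Consume a tag and its length, staying inside the element (used for SEQUENCEs).
    private static void enter(byte[] der, int[] p, int expectedTag) {
        if ((der[p[0]] & 0xFF) != expectedTag)
            throw new IllegalArgumentException("unexpected tag");
        p[0]++;
        readLength(der, p);
    }

    // Skip a whole element: tag, length and content.
    private static void skip(byte[] der, int[] p) {
        p[0]++;
        p[0] += readLength(der, p);
    }

    // Short form: one byte below 0x80. Long form: 0x80 | n, followed by n length bytes.
    private static int readLength(byte[] der, int[] p) {
        int first = der[p[0]++] & 0xFF;
        if (first < 0x80) return first;
        int n = first & 0x7F, len = 0;
        for (int i = 0; i < n; i++) len = (len << 8) | (der[p[0]++] & 0xFF);
        return len;
    }
}

For the sample bytes above (30 82 02 D7 30 82 02 40 A0 03 02 01 02 02 01 01 ...), this returns 1.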