MAC (message authentication code) standard size

Which is the standard size of a MAC generated with SHA-2?
Is it safe if I truncate the MAC and then send it to the destination?
Thank you in advance for your consideration

First of all, do not use the SHA-2 hash function on its own; use HMAC (with SHA-2).
Now to answer your question: in theory, you can use any size for your MAC, depending on your security requirements. However, the most commonly recommended tag size is 128 bits (16 bytes).
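As a hedged illustration in C, using OpenSSL's HMAC API (the key and message here are placeholders), this computes HMAC-SHA256 and truncates the tag to the leftmost 128 bits, which RFC 2104 explicitly permits:

    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned char key[] = "secret-key";              /* placeholder */
        const unsigned char msg[] = "message to authenticate"; /* placeholder */
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0;

        /* HMAC-SHA256 produces a full 32-byte (256-bit) tag. */
        HMAC(EVP_sha256(), key, (int)(sizeof key - 1),
             msg, sizeof msg - 1, mac, &mac_len);

        /* Send only the leftmost 16 bytes (128 bits). RFC 2104 allows
           truncation to t bits as long as t >= 80 and t >= half the
           hash output length. */
        for (int i = 0; i < 16; i++)
            printf("%02x", mac[i]);
        putchar('\n');
        return 0;
    }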


Generic option bytes on STM32F4

I'm currently tuning some code written for an STM32F070, where we use one byte of the user option bytes to keep some flags across resets, using:
FLASH_ProgramOptionByteData(ADRESSE_OPTION_BYTE_DATA0, status_to_store);
Carefully reading the datasheet for our new STM32F446 makes me think that it is no longer possible to use option bytes to store one user byte...
1 - Am I right with this assertion? If not, what did I miss?
2 - Is there some workaround that does not involve rewriting a flash sector?
I am not an expert on STM32, rather still a beginner, but maybe you could have a look at the RTC backup registers to hold your data?
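As a hedged sketch of that idea, assuming an STM32 HAL project with an RTC handle (hrtc) initialized elsewhere, e.g. by CubeMX (the function names store_flags/load_flags are illustrative):

    #include "stm32f4xx_hal.h"      /* assumes an STM32F4 HAL project */

    extern RTC_HandleTypeDef hrtc;  /* RTC handle initialized elsewhere */

    /* Keep one byte of flags in RTC backup register 0. Backup registers
       survive a reset as long as VDD (or VBAT) stays powered. */
    void store_flags(uint8_t status_to_store)
    {
        HAL_PWR_EnableBkUpAccess();  /* unlock the backup domain first */
        HAL_RTCEx_BKUPWrite(&hrtc, RTC_BKP_DR0, status_to_store);
    }

    uint8_t load_flags(void)
    {
        return (uint8_t)HAL_RTCEx_BKUPRead(&hrtc, RTC_BKP_DR0);
    }

Note that, unlike option bytes, the backup registers are cleared if the backup domain loses power, so this covers resets but not cold boots without VBAT.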

Windows DHCP client hostname encoding

Recently I have been trying to save a list of hostnames from captured DHCP packets. I have found out that every DHCP hostname (option 12) should have the form defined in RFC 1035. So if I understand it correctly, the hostname should be encoded in 7-bit ASCII and has other restrictions, e.g.:
- the name should not start with a digit and should omit certain forbidden characters.
Almost every device I have encountered in the captures fulfills these constraints, but not Windows devices (Vendor ID MSFT 5.0). As far as I can tell, the Windows DHCP client takes the computer (mobile) name and fills it into the hostname option.
The problem occurs when the computer name is set, for example, to "Lukáš-PC". Wireshark displays this hostname as Luk\240\347-PC (240 and 347 are octal numbers). To see for myself, I printed the byte values in the packets with printf("%hhu", c) (C language):
á = 160
š = 231
I think this is a simple char overflow. I tried to deduce the original values from the overflowed ones, but I haven't found any relation between the characters and known encodings. So my questions are:
Is there any way to convert these values back to the original characters?
If yes, what was the original character encoding when the overflow happened?
Thanks.
char is usually signed by default, and is promoted to int when passed to a variadic function. To ensure it is printed as unsigned, use printf("%hhu", c) or printf("%d", (unsigned char)c);.
The correct encoding is impossible to know, because it depends on each system's settings.
Note that any compliant system MUST encode names according to RFC 3490, but Windows seems to enjoy violating standards.
The characters á and š that you are seeing are encoded using code page 852 (Latin-2, Central European languages).
Unfortunately, there is no simple way to figure out the encoding used just by looking at the DHCP requests; in principle, the DHCP client can use any code page it wants. If you are working in a private/controlled network, it is probably safe to assume that all clients use the same code page and to decode the strings explicitly using that particular code page.
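For illustration, a hedged sketch of that decoding step in C, assuming glibc's iconv with CP852 support (the raw bytes below match á=160 and š=231 from the question):

    #include <iconv.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hostname bytes as captured from the DHCP packet. */
        char raw[] = { 'L', 'u', 'k', (char)160, (char)231, '-', 'P', 'C', 0 };
        char out[64];

        iconv_t cd = iconv_open("UTF-8", "CP852");  /* guessed code page */
        if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }

        char *in_p = raw, *out_p = out;
        size_t in_left = strlen(raw), out_left = sizeof out - 1;
        if (iconv(cd, &in_p, &in_left, &out_p, &out_left) == (size_t)-1)
            perror("iconv");
        *out_p = '\0';
        iconv_close(cd);

        printf("%s\n", out);  /* prints Lukáš-PC if the guess was right */
        return 0;
    }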

9 bit protocol on UART in embedded Linux

I am trying to force a 9-bit protocol on a UART in embedded Linux. Currently I am testing this on the am335x_evm board. I am planning to do this using stick parity, and ideally I would not need to modify any code in the omap-serial.c driver.
The reason for the 9-bit protocol is to support some legacy hardware that uses it. The parity bit needs to be 1 for the address portion of the message, 0 for the data portion, then 1 again for the termination byte.
I was planning on having a process running in user space that would interface with the UART through standard system calls (open, write, read, ioctl, tcsetattr, etc.). I would configure the UART to enable parity and select stick parity. I would then set the parity bit to 1 (mark) and call write() to send out my address data, then set the parity bit to 0 (space) and send out the data. My concern is: if I change the parity from 1 to 0, when does that take effect? If the UART is not done sending all the address data, will the change in parity apply to the unsent data?
I ended up writing my own 9-bit UART driver. It was the easiest and most efficient solution.
The proper way is to set CS9 on your serial port (and possibly no parity), provided that the hardware and driver support this.
I'll search for a link for you...
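For reference, a hedged sketch of the stick-parity approach from the question, using termios. It assumes the UART driver honors CMSPAR (mark/space parity), which not all drivers do:

    #define _GNU_SOURCE           /* CMSPAR is a GNU/Linux extension */
    #include <termios.h>

    /* Select the 9th bit for subsequent writes:
       1 = mark parity, 0 = space parity. Returns 0 on success. */
    static int set_ninth_bit(int fd, int bit)
    {
        struct termios tio;

        if (tcgetattr(fd, &tio) < 0)
            return -1;

        tio.c_cflag |= PARENB | CMSPAR;  /* enable "stick" parity */
        if (bit)
            tio.c_cflag |= PARODD;       /* mark parity: 9th bit = 1 */
        else
            tio.c_cflag &= ~PARODD;      /* space parity: 9th bit = 0 */

        /* TCSADRAIN waits until everything already written has left
           the UART, so the new parity setting never applies to
           not-yet-sent address bytes. */
        return tcsetattr(fd, TCSADRAIN, &tio);
    }

The TCSADRAIN argument also answers the timing concern in the question: the parity change only takes effect after all pending output has drained.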

Discovering maximum packet size

I'm working on a network-related project and I am using DTLS (TLS over UDP) to secure communications.
Reading the specifications for DTLS, I've noted that DTLS requires the DF flag (Don't Fragment) to be set.
On my local network, if I try to send a message bigger than 1500 bytes, nothing is sent; that makes perfect sense. On Windows, sendto() reports success, but nothing is actually sent.
I obviously cannot unset the DF flag manually, since it is mandatory for DTLS, and I'm not sure whether the 1500-byte limit (the MTU?) could change in some situations. I guess it can.
So, my question is: is there a way to discover this limit using APIs?
If not, what would be the lowest value I can safely assume?
My software runs under UNIX (Linux/Mac OS X) and Windows, so different solutions for each OS are welcome ;)
Many thanks.
There is a minimum MTU that must be supported: 576 bytes, including IP headers. So if you keep your packets below that, you don't have to worry about path-MTU discovery (that's what DNS does).
You probably need to 'auto-tune' it by sending a range of packet sizes to the target and seeing which arrive. Think binary search...
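On Linux specifically, a hedged sketch of asking the kernel for its current path-MTU estimate on a connected UDP socket (IP_MTU_DISCOVER and IP_MTU are Linux-only socket options; Windows would need a different mechanism):

    #include <netinet/in.h>   /* IP_MTU_DISCOVER, IP_PMTUDISC_DO, IP_MTU */
    #include <sys/socket.h>
    #include <stdio.h>

    /* fd must be a UDP socket that has been connect()ed to the peer,
       so the kernel knows which destination path is meant. */
    int query_path_mtu(int fd)
    {
        int pmtu_mode = IP_PMTUDISC_DO;  /* always set DF, never fragment */
        if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                       &pmtu_mode, sizeof pmtu_mode) < 0) {
            perror("setsockopt(IP_MTU_DISCOVER)");
            return -1;
        }

        int mtu = 0;
        socklen_t len = sizeof mtu;
        if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) < 0) {
            perror("getsockopt(IP_MTU)");
            return -1;
        }
        return mtu;  /* e.g. 1500 on a typical Ethernet path */
    }

The estimate is only refined after traffic has actually triggered ICMP "fragmentation needed" responses, so it may start out at the first-hop MTU.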

Using Win32 Crypto API

I can't find any help on using the PROV_RSA_AES CSP in C++. Is there any article or book that could help me out with it?
Here is an article about it.
Here is another one.
I just want to use one; I figured out how to get a context, but I'm still wondering about the size of the buffer I need to use for CryptEncrypt() to get it working with AES-256. I also want to use a random salt.
AES-256 in CBC mode with PKCS#7 padding (which is the default) needs a buffer size equal to the input length rounded up to the next multiple of 16; the padding always adds at least one byte, so exact multiples round up a whole block. I.e. 35 -> 48, 52 -> 64, 80 -> 96.
There is no salt involved in AES-256 itself. Are you talking about key derivation? Or do you mean the IV?
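As a hedged sketch of the buffer-sizing part (error handling omitted; the password and message are placeholders; link with advapi32): CryptEncrypt can first be called with a NULL buffer to report the required output size, then called again to encrypt in place:

    #include <windows.h>
    #include <wincrypt.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        HCRYPTPROV hProv = 0;
        HCRYPTHASH hHash = 0;
        HCRYPTKEY  hKey  = 0;
        const char *password = "example-password";  /* placeholder */
        BYTE data[128] = "attack at dawn";          /* placeholder */
        DWORD dataLen = (DWORD)strlen((char *)data);

        /* PROV_RSA_AES is the "Microsoft Enhanced RSA and AES" CSP. */
        CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_AES,
                            CRYPT_VERIFYCONTEXT);

        /* Derive an AES-256 key from the password via SHA-256. */
        CryptCreateHash(hProv, CALG_SHA_256, 0, 0, &hHash);
        CryptHashData(hHash, (const BYTE *)password,
                      (DWORD)strlen(password), 0);
        CryptDeriveKey(hProv, CALG_AES_256, hHash, 0, &hKey);

        /* With a NULL buffer, CryptEncrypt reports the required size:
           the plaintext length rounded up to the next 16-byte block. */
        DWORD needed = dataLen;
        CryptEncrypt(hKey, 0, TRUE, 0, NULL, &needed, 0);
        printf("plaintext %lu bytes -> buffer of %lu bytes\n",
               (unsigned long)dataLen, (unsigned long)needed);

        /* Encrypt in place; data[] must be at least 'needed' bytes. */
        CryptEncrypt(hKey, 0, TRUE, 0, data, &dataLen, sizeof data);

        CryptDestroyKey(hKey);
        CryptDestroyHash(hHash);
        CryptReleaseContext(hProv, 0);
        return 0;
    }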
