What is the maximum payload possible in WSMP? - 802.11p

I searched on Google, and I could see that the data length is variable, but the maximum size is not given.
I could not find it in IEEE 1609.3 either.
Please help me out.

WSM messages have variable-length payloads. The theoretical maximum is 4095 bytes, because the WSM length field is only 12 bits wide (the upper 4 bits of the 2-byte field are reserved), and a 12-bit field can express lengths up to 2^12 - 1 = 4095.
However, the recommended maximum length of a WSM, including the data, is 1400 bytes, as specified in Annex B of the standard.
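
For illustration, here is a minimal C sketch of how such a length field could be parsed, assuming the layout described above (a 2-byte field whose upper 4 bits are reserved and whose low 12 bits carry the length). The function name and the wire byte order are my own assumptions, not taken from IEEE 1609.3:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: extract a 12-bit WSM length from a 2-byte field
 * in which the top 4 bits are reserved, as the answer above describes.
 * The big-endian wire order is an assumption for illustration only. */
#define WSM_LENGTH_MASK 0x0FFFu  /* low 12 bits carry the length */

static uint16_t wsm_payload_length(uint8_t hi, uint8_t lo)
{
    uint16_t raw = (uint16_t)((hi << 8) | lo);
    return raw & WSM_LENGTH_MASK;
}

int main(void)
{
    /* With the 4 reserved bits stripped, the largest representable
     * length is 2^12 - 1 = 4095. */
    printf("max representable length: %u\n", wsm_payload_length(0xFF, 0xFF));
    return 0;
}
```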

Related

How many address bits will a memory of n bytes need?

I'm having a hard time translating from one to the other. My problem is probably what a memory is and how it's defined, not the translation itself.
Say I have an address with 1 bit, so it can be either 0 or 1: 2^1 possibilities. How much data can it hold?
In the case of a 16-bit address, there are 2^16 possibilities, which is 65536. And a 16-bit address can address 65536 bytes (64 KB). Why is this? Shouldn't it hold 65536 bits?
By the same logic, a 1-bit address can hold 2 bytes. How can an address that is either 0 or 1 hold 2 bytes of data?
EDIT: I had already searched for a while, and some time after creating this post I came across a post explaining it. Basically, 2^n is the number of possible addresses, and each address points to one byte. That's why the number of addresses = the number of bytes.
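
A tiny C illustration of that conclusion: n address bits give 2^n distinct addresses, and with one byte per address that is 2^n addressable bytes.

```c
#include <stdint.h>
#include <stdio.h>

/* n address bits give 2^n distinct addresses; one byte per address
 * means 2^n addressable bytes. */
int main(void)
{
    const unsigned bits[] = {1, 16, 32};
    for (size_t i = 0; i < sizeof bits / sizeof bits[0]; i++) {
        uint64_t bytes = (uint64_t)1 << bits[i];  /* 2^n */
        printf("%2u address bits -> %llu addressable bytes\n",
               bits[i], (unsigned long long)bytes);
    }
    return 0;  /* prints 2, 65536, and 4294967296 */
}
```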

Addressing Size Regarding Bytes

Just to make sure, does every single address contain one byte? So say you had theoretical addresses FFF0 and FFFF: there are 16 values between these two addresses, which means between them they contain 16 bytes, or 8 x 16 bits? Every individual address is linked to a single byte?
Just to make sure, does every single address contain one byte?
...which means between them they contain 16 bytes, or 8 x 16 bits?
Every individual address is linked to a single byte?
Yes to all three questions.
This is why, with the limitation of 32-bit addressing, you can only access 2^32 bytes == 4,294,967,296 bytes == 4 GiB. Each addressable memory location gives access to 1 byte.
If we could access 2 bytes with one address, that limit would have been 8 GiB. But the architecture of modern chips, and all software, would have to be modified to indicate whether they want both bytes or just the first or the second, so you'd need, say, 1 more bit to express that. Guess what: if you had 33-bit machines, that's what we'd get... a maximum addressable space of 8 GiB, which is still effectively one byte per address. Workarounds do exist, but they're not related to your questions.
* GiB = gibibytes (binary gigabytes).
Note that this is not related to "types", where a char is 1 byte and an int is 4 bytes. Programming languages compensate for that when accessing the value of a variable stored at a location, and sizes are actually calculated in total bits rather than total bytes, so an int is treated as 32 bits rather than 4 bytes. When C fetches an int's value from memory, it fetches all 4 bytes, even though the address of the int refers to just one of them: the address of the first byte.
Yes. Addresses map to bytes 1 to 1, even if they expect you to work with a word size of two or four bytes at a time.
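
A short C snippet makes the 1-to-1 mapping concrete: each of the four bytes of an int has its own address, and the int's address is simply the address of its first byte.

```c
#include <stdio.h>

/* Each byte of a 4-byte int has its own address; the int's address
 * is the address of its first byte. */
int main(void)
{
    int value = 0x11223344;
    unsigned char *p = (unsigned char *)&value;

    for (int i = 0; i < (int)sizeof value; i++)
        printf("address %p holds byte 0x%02X\n", (void *)(p + i), p[i]);
    return 0;
}
```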

Improve this compression algorithm?

I am compressing 8-bit bytes, and the algorithm works only if the number of unique single bytes found in the data is 128 or less.
I take all the unique bytes. At the start I store a table containing each unique byte once. If there are 120 of them, I store 120 bytes.
Then, instead of storing each item in 8 bits of space, I store each item in 7 bits, one after another. Those 7 bits contain the item's position in the table.
Question: how can I avoid storing those 120 bytes at the start, by storing the possible tables in my code?
What you are trying to do is a special case of Huffman coding where you consider only which bytes are unique, not their frequencies, and hence give each byte a fixed-length code. You can do better: use the byte frequencies to assign variable-length codes via Huffman coding and get more compression.
But if you intend to use the same algorithm, then consider this: don't store the 120 bytes; store 256 bits (32 bytes) in which bit v is set when byte value v is present. That gives you all the information you need: the decoder reads the bitmap to learn which values occur in the file and reconstructs the same mapping table, as sketched below.
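
Here is a minimal C sketch of that idea (the function names are my own, not from any library). The encoder sets bit v when byte value v occurs; the decoder scans the 32-byte bitmap in order to rebuild the identical value table, so the 7-bit indices agree on both ends.

```c
#include <stdint.h>
#include <stddef.h>

/* Build the 32-byte presence bitmap: bit v is set when byte value v
 * occurs anywhere in the input. */
static void build_bitmap(const uint8_t *data, size_t len, uint8_t bitmap[32])
{
    for (size_t i = 0; i < 32; i++) bitmap[i] = 0;
    for (size_t i = 0; i < len; i++)
        bitmap[data[i] >> 3] |= (uint8_t)(1u << (data[i] & 7));
}

/* Rebuild the value table from the bitmap. Returns the number of
 * distinct byte values; table[k] is the k-th value present, so both
 * encoder and decoder derive identical 7-bit codes. */
static int bitmap_to_table(const uint8_t bitmap[32], uint8_t table[256])
{
    int count = 0;
    for (int v = 0; v < 256; v++)
        if (bitmap[v >> 3] & (1u << (v & 7)))
            table[count++] = (uint8_t)v;
    return count;
}
```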
I don't know the exact algorithm, but the idea of such a compression algorithm is probably that you cannot: it has to store those values so that it can write a shortcut for all the other bytes in the data.
There is one way you could avoid writing those 120 bytes: when you know their contents beforehand. For example, if you know that whatever you are going to send will only ever contain those bytes, then you can simply make the table known on both sides and store everything except those 120 bytes.

What postscript object is 8-bytes normally, but 9-bytes packed?

A packed array in PostScript is supposed to be a space-saving feature, where objects can be squeezed tightly in memory by omitting extraneous information. A null, for example, can be just a single byte because it carries no information. Booleans could be a single byte, too. Integers could be 5 (or 3) bytes (if the number is small). Reference objects would need the full 8 bytes that a normal object does. But the PostScript manual says that packed objects occupy 1 to 9 bytes!
When the PostScript language scanner encounters a procedure delimited by
{ … }, it creates either an array or a packed array, according to the current packing
mode (see the description of the setpacking operator in Chapter 8). An
array value occupies 8 bytes per element. A packed array value occupies 1 to 9
bytes per element, depending on each element’s type and value; a typical average
is 2.5 bytes per element. --PLRM 3ed, B.2. Virtual Memory Use, p. 742
So what object gets bigger when packed? And why? Hydrogen bonding??!
Any value that you can't represent in 7 bytes or less will need 9 bytes.
The packed format starts with a byte that contains how many data bytes follow, so any value that needs all 8 bytes of data will be 9 bytes including the leading length byte.
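
For illustration, here is a toy C sketch of a length-prefixed encoding in the same spirit. This is not PostScript's actual packed-object format; it only demonstrates why a value that needs all 8 data bytes costs 9 bytes once the leading count byte is included.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only -- not the real PostScript packed encoding. A
 * leading count byte says how many data bytes follow, and only as many
 * data bytes as the value needs are stored, so a value requiring all
 * 8 data bytes occupies 9 bytes packed. Returns the total size. */
static size_t pack_value(uint64_t v, uint8_t out[9])
{
    size_t n = 0;
    uint8_t bytes[8];
    do {                          /* little-endian, minimal length */
        bytes[n++] = (uint8_t)(v & 0xFF);
        v >>= 8;
    } while (v != 0);
    out[0] = (uint8_t)n;          /* leading length byte */
    memcpy(out + 1, bytes, n);
    return n + 1;                 /* small values: 2 bytes; worst case: 9 */
}
```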

UUencode checksum fault

I have a stream of data that I'm attempting to encode with UUencode, in order to pass the data on to an external chip. The chip accepts 512 bytes of raw data at once, so I encode 512 bytes with UUencode.
As far as I understand, the data should be converted into 11 lines of 45 bytes (which become 60 bytes each after encoding) and 1 remaining line of 17 bytes.
Obviously the 17 bytes can't map directly to UUencoded segments, as 17 isn't a multiple of 3; yet when I get the UUencoded data back, the final line returns 24 encoded bytes (or 18 raw bytes).
This means that I now have 513 bytes of data in total. My question is: is this a fault in my UUencode algorithm (although from a purely mathematical perspective I can't see how it can be), or alternatively, where does the extra byte come from, and how do I get rid of it again?
UUEncoding 512 bytes will get you 684 encoded bytes (not 513): 512 bytes is 170 full 3-byte groups plus 2 leftover bytes, which are padded into a 171st group, and 171 × 4 = 684. An input data stream of 384 bytes (exactly 128 groups) will encode to exactly 512 bytes.
UUEncoding is simply a means of transforming a 3-byte binary input segment into a 4-byte text output segment. Any input segment that is not 3 bytes long is padded with null bytes until it is; the encoding itself has no representation of the original data length.
Contrast this with UUencoded files, which add framing to the data stream by breaking it into lines of a specific length and prefixing each encoded line with a line-length indicator. In your example, your 17 final bytes would be encoded to 24 bytes, but that line would be preceded by a byte giving the length of the line as 17 instead of 18.
The only way to get rid of the padding is to know it is there in the first place, by encoding the length of the data, as the sketch below shows.
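
For reference, a minimal C sketch of the classic UUencode transform (three raw bytes to four printable bytes, each carrying 6 bits offset by 0x20) with line framing. It shows both the zero padding that produces your extra byte and the leading length character that lets a decoder drop it again. (Historic implementations map the value 0 to a backtick instead of a space; that detail is omitted here.)

```c
#include <stdint.h>
#include <stddef.h>

/* Classic uuencode step: three raw bytes become four printable bytes.
 * A short final group is padded with zero bytes, which is why 17 raw
 * bytes decode back as 18 unless the line's leading length character
 * is honored. */
static void uu_enc3(const uint8_t in[3], char out[4])
{
    out[0] = (char)(0x20 + ((in[0] >> 2) & 0x3F));
    out[1] = (char)(0x20 + (((in[0] << 4) | (in[1] >> 4)) & 0x3F));
    out[2] = (char)(0x20 + (((in[1] << 2) | (in[2] >> 6)) & 0x3F));
    out[3] = (char)(0x20 + (in[2] & 0x3F));
}

/* Encode one line: prefix the raw-byte count, pad the tail group.
 * For a 17-byte tail this emits 1 + 24 characters, and the length
 * character records 17, not 18. Returns bytes written to dst. */
static size_t uu_line(const uint8_t *src, size_t len, char *dst)
{
    size_t o = 0;
    dst[o++] = (char)(0x20 + len);           /* length character */
    for (size_t i = 0; i < len; i += 3) {
        uint8_t g[3] = {0, 0, 0};            /* zero padding */
        for (size_t j = 0; j < 3 && i + j < len; j++) g[j] = src[i + j];
        uu_enc3(g, dst + o);
        o += 4;
    }
    return o;
}
```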
