What is the maximum account name length in the NEAR blockchain?

I'm using the account name as a key in various places, and would like to know the maximum account name length so I can compute a storage cost bound.

Maximum length is 64 bytes.
More details: https://nomicon.io/DataStructures/Account

The maximum account name length in the NEAR blockchain is 64 characters (the minimum is 2).
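Since the goal here is a storage-cost bound, here is a rough sketch of the worst-case arithmetic. The 1e19 yoctoNEAR-per-byte figure (1 NEAR per 100 KB) is the commonly cited storage price, not something from this thread; check the live protocol config before relying on it.

    # Worst-case storage bound for a record keyed by an account ID.
    # Assumption: storage cost of 1e19 yoctoNEAR per byte (1 NEAR per 100 KB);
    # verify against the current protocol config.
    MAX_ACCOUNT_ID_BYTES = 64            # per https://nomicon.io/DataStructures/Account
    STORAGE_COST_YOCTO_PER_BYTE = 10**19

    def max_storage_cost_yocto(value_bytes: int, key_overhead: int = 0) -> int:
        """Upper bound on storage cost for one key/value entry keyed by an account ID."""
        total_bytes = MAX_ACCOUNT_ID_BYTES + key_overhead + value_bytes
        return total_bytes * STORAGE_COST_YOCTO_PER_BYTE

    # Example: a 32-byte value stored under an account-ID key costs at most
    # (64 + 32) * 1e19 yoctoNEAR = 0.00096 NEAR.
    print(max_storage_cost_yocto(32) / 10**24, "NEAR")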

Related

How many address bits will a memory of n bytes need?

I'm having a hard time translating from one to the other. My problem is probably what a memory is and how it's defined, not the translation itself.
Say I have an address with 1 bit. So it can be either 0 or 1, 2^1 possibilities. How much data can it hold?
In the case of a 16-bit address there are 2^16 possibilities, which is 65536. And a 16-bit address can address 65536 bytes (64 KB). Why is this? Shouldn't it hold 65536 bits?
By the same logic, 1 bit can hold 2 bytes. How can an address that is either 0 or 1 hold 2 bytes of data?
EDIT: I had already searched for a while, and some time after creating this post I came across a post explaining it. Basically, 2^n is the number of possible addresses, and each address points to one byte. That's why the number of addresses = the number of bytes.
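A short sketch of that relationship, assuming byte-addressable memory:

    import math

    # Byte-addressable memory: n address bits give 2**n distinct addresses,
    # and each address names exactly one byte.
    def addressable_bytes(address_bits: int) -> int:
        return 2 ** address_bits

    # Going the other way: a memory of n bytes needs ceil(log2(n)) address bits.
    def address_bits_needed(n_bytes: int) -> int:
        return math.ceil(math.log2(n_bytes))

    print(addressable_bytes(1))        # 2 addresses  -> 2 bytes
    print(addressable_bytes(16))       # 65536 addresses -> 65536 bytes = 64 KB
    print(address_bits_needed(65536))  # 16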

How do I compute tag of full associative mapped cache?

I have a fully associative cache with 2^10 lines in total.
Each line holds 16 data items.
The address bus is 32 bits.
How many bits are needed for the TAG?
I think I have to compute the bus width minus the offset. Is that correct? How do I compute the offset?
The correct way is to subtract the base-2 logarithm of the block size from the address-bus width. In this case that is 32 - 4 = 28.
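In code, under the assumption that each of the 16 data items occupies one addressable unit:

    import math

    def tag_bits_fully_associative(address_bits: int, items_per_line: int) -> int:
        # Fully associative: no index bits, so tag = address bits - offset bits.
        offset_bits = int(math.log2(items_per_line))
        return address_bits - offset_bits

    print(tag_bits_fully_associative(32, 16))  # 32 - 4 = 28
    # The 2**10 lines only affect how many tags must be compared, not the tag width.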

What is the maximum payload possible in WSMP?

I searched on Google, but I could only see that the data length is variable; the maximum size is not given.
I could not find it in IEEE 1609.3 either.
Please help me out.
WSM messages have variable-length payloads. The theoretical maximum is 4096 bytes, because the length field is only 12 bits (4 bits of the 2-byte field are reserved).
However, the recommended maximum length of a WSM, including data, is 1400 bytes, as specified in Annex B of the standard.
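For what it's worth, the 4096 figure is just the number of values the 12-bit length field can count:

    # WSMP length field: 2 octets, of which 4 bits are reserved, leaving 12 usable bits.
    LENGTH_FIELD_BITS = 16 - 4
    print(2 ** LENGTH_FIELD_BITS)  # 4096, the theoretical maximum
    print(1400)                    # recommended maximum per IEEE 1609.3 Annex B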

Improve this compression algorithm?

I am compressing 8-bit bytes, and the algorithm works only if the number of unique byte values in the data is 128 or less.
I take all the unique bytes. At the start I store a table containing each unique byte once. If there are 120 of them, I store 120 bytes.
Then, instead of storing each item in 8 bits, I store each item in 7 bits, one after another. Those 7 bits hold the item's position in the table.
Question: how can I avoid storing those 120 bytes at the start, by storing the possible tables in my code?
What you are trying to do is a special case of Huffman coding in which you only consider which bytes occur, not their frequencies, so every byte gets a fixed-length code. You can do better: use the frequencies to assign variable-length codes with Huffman coding and get more compression.
But if you intend to keep the same algorithm, consider this instead:
Don't store the 120 bytes; store 256 bits (32 bytes) in which a 1 indicates that the corresponding byte value is present. That gives you all the information you need: the reader uses the bitmap to recover which values occur in the file and reconstructs the mapping table, as in the sketch below.
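A minimal sketch of that suggestion (the names and framing are mine, not from the question): a 32-byte presence bitmap replaces the literal table, and the payload is packed as 7-bit indices. The decoder still needs the original symbol count, so in practice that would be stored as well.

    # Compress: 32-byte presence bitmap + 7-bit indices into the reconstructed table.
    def compress(data: bytes) -> bytes:
        table = sorted(set(data))
        assert len(table) <= 128, "scheme only works with at most 128 distinct byte values"

        # Bit i of the bitmap is set if byte value i occurs in the data.
        bitmap = bytearray(32)
        for b in table:
            bitmap[b // 8] |= 1 << (b % 8)

        index = {b: i for i, b in enumerate(table)}
        out = bytearray(bitmap)
        bits, nbits = 0, 0
        for b in data:                      # pack 7-bit indices back to back
            bits = (bits << 7) | index[b]
            nbits += 7
            while nbits >= 8:
                nbits -= 8
                out.append((bits >> nbits) & 0xFF)
        if nbits:
            out.append((bits << (8 - nbits)) & 0xFF)   # zero-pad the final byte
        return bytes(out)

    # Decompress: rebuild the table from the bitmap, then unpack 7-bit indices.
    def decompress(blob: bytes, n_symbols: int) -> bytes:
        bitmap, payload = blob[:32], blob[32:]
        table = [v for v in range(256) if bitmap[v // 8] & (1 << (v % 8))]
        out = bytearray()
        bits, nbits = 0, 0
        for byte in payload:
            bits = (bits << 8) | byte
            nbits += 8
            while nbits >= 7 and len(out) < n_symbols:
                nbits -= 7
                out.append(table[(bits >> nbits) & 0x7F])
        return bytes(out)

    data = b"hello compression"
    assert decompress(compress(data), len(data)) == data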
I don't know the exact algorithm, but the whole point of the compression scheme is probably that you cannot: it has to store those values so it can write a shortcut for all the other bytes in the data.
There is one way you could avoid writing those 120 bytes: when you know their contents beforehand. For example, when you know that whatever you are going to send will only ever contain those bytes. Then you can simply make the table known on both sides and store everything except those 120 bytes.

How to calculate Storage when ftping to MainFrame

How can I calculate storage when FTPing to the mainframe? I was told LRECL will always remain '80'. I'm not sure how I can calculate PRI and SEC dynamically based on the file size...
QUOTE SITE LRECL=80 RECFM=FB CY PRI=100 SEC=100
If the site has SMS, you shouldn't need to. But if you do: the number of tracks is the size of the file in bytes divided by 56,664, and the number of cylinders is the size of the file in bytes divided by 849,960. In either case, round up.
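For reference, the same track/cylinder arithmetic as a short sketch (3390 geometry as above; the 10 MB file size is just an example):

    import math

    TRACK_BYTES = 56_664       # usable bytes per 3390 track
    CYLINDER_BYTES = 849_960   # 15 tracks per cylinder

    def tracks_needed(file_size_bytes: int) -> int:
        return math.ceil(file_size_bytes / TRACK_BYTES)

    def cylinders_needed(file_size_bytes: int) -> int:
        return math.ceil(file_size_bytes / CYLINDER_BYTES)

    print(tracks_needed(10_000_000))     # 177 tracks for a 10 MB file
    print(cylinders_needed(10_000_000))  # 12 cylinders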
Unfortunately IBM's FTP server does not support the newer space allocation specifications in number of records (the JCL parameter AVGREC=U/M/K plus the record length as the first specification in the SPACE parameter).
However, there is an alternative, and that is to fall back on one of the lesser-used SPACE parameters - the blocksize specification. I will assume 3390 disk types for simplicity, and standard data sets.
For fixed-length records, you want the largest multiple of the record length that fits in half a track (27,994 bytes), because z/OS only supports block sizes up to 32,760. Since you are dealing with 80-byte records, that number is 27,920. Divide your file size by that number to get the number of blocks. Then, in a SITE server command, specify
SITE BLKSIZE=27920 LRECL=80 RECFM=FB BLOCKS=27920 PRI=calculated# SEC=a_little_extra
This is equivalent to SPACE=(27920,(calculated#,a_little_extra)).
z/OS space allocation calculates the number of tracks required and rounds up to the nearest track boundary.
For variable-length records, if your reading application can handle it, always use BLKSIZE=27994. The reason I have the warning about the reading application is that even today there are applications from ISVs that still have strange hard-coded maximum variable length blocks such as 12K.
If you are dealing with PDSEs, always use BLKSIZE=32760 for variable-length and the closest-to-32760 for fixed-length in your specification (32720 for FB/80), but calculate requirements based on BLKSIZE=4096. PDSEs are strange in their underlying layout; the physical records are 4096 bytes, which is because there is some linear data set VSAM code that handles the physical I/O.
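Putting the block-based calculation together for the FB/80 case, a rough sketch under the same assumptions (the 5 MB size and the 10% secondary cushion are arbitrary examples):

    import math

    LRECL = 80
    HALF_TRACK = 27_994                      # largest block that fits in half a 3390 track
    BLKSIZE = (HALF_TRACK // LRECL) * LRECL  # 27920 for 80-byte fixed-length records

    def primary_blocks(file_size_bytes: int) -> int:
        return math.ceil(file_size_bytes / BLKSIZE)

    size = 5_000_000
    pri = primary_blocks(size)
    sec = max(1, pri // 10)   # "a little extra"; pick whatever cushion suits your site
    print(f"SITE BLKSIZE={BLKSIZE} LRECL=80 RECFM=FB BLOCKS={BLKSIZE} PRI={pri} SEC={sec}")
    # -> SITE BLKSIZE=27920 LRECL=80 RECFM=FB BLOCKS=27920 PRI=180 SEC=18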
