1 byte is equal to 8 bits. What is the logic behind this? [closed] - byte

Why not 4 bits, or 16 bits?
I assume there are hardware-related reasons, and I'd like to know how the 8-bit byte became the standard.

It's been a minute since I took computer organization, but the Wikipedia article on 'Byte' gives some context.
The byte was originally the smallest number of bits that could hold a single character (I assume standard ASCII). We still use the ASCII standard, so 8 bits per character is still relevant. This sentence, for instance, is 41 bytes. That's easily countable and practical for our purposes.
If we had only 4 bits, there would be only 16 (2^4) possible characters, unless we used 2 bytes to represent a single character, which is computationally less efficient. If we had 16 bits in a byte, we would have a whole lot more 'dead space': we would allow 65,536 (2^16) possible characters, which would make computers run less efficiently when performing byte-level instructions, especially since our character set is much smaller.
Additionally, a byte can represent 2 nibbles. Each nibble is 4 bits, which is the smallest number of bits that can encode any numeric digit from 0 to 9 (10 different digits).
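As a small illustration of that last point, here is a Python sketch (the function names are just illustrative) that packs two decimal digits into the two nibbles of a single byte and unpacks them again:

```python
def pack_bcd(high_digit, low_digit):
    """Pack two decimal digits (0-9) into one byte: one digit per 4-bit nibble."""
    assert 0 <= high_digit <= 9 and 0 <= low_digit <= 9
    return (high_digit << 4) | low_digit

def unpack_bcd(byte_value):
    """Split a byte back into its high and low nibbles."""
    return (byte_value >> 4) & 0xF, byte_value & 0xF

packed = pack_bcd(4, 2)          # 0b0100_0010 == 0x42
print(format(packed, '08b'))     # 01000010
print(unpack_bcd(packed))        # (4, 2)
```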

Related

How to figure out if a number is present in a very large dataset [closed]

I was wondering what would be a suitable answer to the interview question "Given a very large set of numbers, write a service that will return whether a number is present within 500 ms". There would be trillions of numbers. The question was supposed to test my knowledge of scalability and architecture. I answered that I would break the set of numbers into multiple buckets and assign each bucket to a specific server, very much like a HashMap dividing its keys into buckets. On each server, I would maintain something like a bit array marking whether a number is present. He then asked what if the numbers are very sparse, in which case I would use a balanced binary search tree such as a red-black or AVL tree. I guess there are multiple solutions to this problem; what would the other answers be?
A trillion is 10^12. A bigint is 8 bytes, so you have 10^12 * 8 bytes = 8 TB (about 7.3 TiB).
You can easily buy an 8 TB disk for $500, and it is not hard to buy a 16 TB one, so you can store all of the numbers on one machine with no need for fancy multi-machine setups. Then you sort them once, which takes O(n * log n), roughly 2.8 * 10^13 operations.
On my machine a Go program can execute approximately 10^9 operations in 0.6 seconds, so nothing stops a C program from sorting this many integers in about 5 hours, and that is done only once. To look up a number you then need about log2(10^12) ≈ 40 comparisons, i.e. fewer than 50 disk seeks, which fits comfortably within the 500 ms budget.
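As a toy sketch of the lookup half of that idea (in-memory and in Python rather than over the on-disk sorted file, but the logic is identical):

```python
import bisect

def contains(sorted_numbers, target):
    """Membership test in O(log n) comparisons over a sorted sequence."""
    i = bisect.bisect_left(sorted_numbers, target)
    return i < len(sorted_numbers) and sorted_numbers[i] == target

data = sorted([3, 141, 59, 26, 535, 8979])
print(contains(data, 59))    # True
print(contains(data, 60))    # False
```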

Bit / Byte Confusion on Binary Digits [closed]

So I heard that one character is a byte. But then I went to a website that measures text size (I'm making a website), typed in a 1, and the byte count showed 1 byte. 8 bits = 1 byte, right? I thought 8 binary digits made 1 character. How can a 1 be a byte and a bit at the same time?
These days, the number of bits per character is not so simple but there are most definitely 8 bits per byte.
The number of bits/bytes per character will vary depending on your system and the character set.
Original ASCII was 7 bits per character.
That was extended to 8 bits per character and stayed there for a very long time.
These days, with Unicode, you have encodings whose code units are 8, 16 and 32 bits (1, 2 and 4 bytes) wide, where the 8-bit variety corresponds to ASCII for the first 128 characters.
The character 1 is represented by the binary value 00110001b (decimal 49), which is 8 bits and thus 1 byte. The actual binary number 1b would be, at a minimum, 00000001b, which represents a control character (SOH), not the character 1.
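You can check this yourself, for example in Python:

```python
print(ord('1'))                    # 49 -- the code point of the character '1'
print(format(ord('1'), '08b'))     # 00110001 -- the byte that stores the character '1'
print(repr(chr(0b00000001)))       # '\x01' -- the SOH control character, not '1'
```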
Hopefully this helps.

How many permutations of an alphanumeric string of length 32 [closed]

Considering the characters (elements) a-z and 0-9, how many different permutations are there for a string of length 32?
The elements can be used more than once, but the results have to be unique (as is the case with a permutation).
I've looked at WolframAlpha, but that doesn't let me state how many 'elements' would be used; it only considers the length.
You have 32 positions, and in each position goes either one of 26 letters or one of 10 digits, so you have 36 possibilities per position. This leaves us with:
36*36*36...*36 (32 times)
= 36^32
= 63340286662973277706162286946811886609896461828096 # (thanks Python ;) )
The answer is (26+10)^32 = 6.3340287e+49
Well, it depends on whether you're allowed replacement or not.
If you are allowed replacement, you have 36 possibilities for each character position = 36^32.
If you're not allowed replacement, you have 36 for the first, 35 for the second, etc, until you run out of character positions. That's 36! / 4!, also written as 36 P 32.
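Both counts are easy to check in Python (math.perm needs Python 3.8 or newer):

```python
import math

with_repetition = 36 ** 32               # any of 36 symbols in each of 32 positions
without_repetition = math.perm(36, 32)   # 36! / (36-32)! = 36! / 4!

print(with_repetition)      # 63340286662973277706162286946811886609896461828096
print(without_repetition)
```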

Modbus RTU - 3.5 chars time [closed]

I am new to Modbus and am developing an application using Modbus RTU. I would like to know how to work out the RTU message frame separation time. The Modbus RTU specification mentions a 3.5-character time, but there is no more detail about how I can determine this interval. Any ideas?
It depends on your serial port settings. Based on the baud rate, the number of data bits in each character, and the parity and stop bits, you can calculate the duration of 3.5 characters: one character on the wire is the start bit plus the data bits plus an optional parity bit plus the stop bits, and its duration is that bit count divided by the baud rate.
The Modbus RTU specification also mentions 1.5 character times as the maximum silent interval between the bytes of a message. Beyond this, the incomplete message is discarded and the next byte is treated as the address field of a new message.
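A rough sketch of that calculation in Python (the function names are mine, and the frame format assumed is the common RTU default: 1 start bit, 8 data bits, an optional parity bit, 1 stop bit):

```python
def char_time_seconds(baud_rate, data_bits=8, parity=True, stop_bits=1):
    """Time on the wire for one character: start bit + data bits + parity + stop bits."""
    bits_per_char = 1 + data_bits + (1 if parity else 0) + stop_bits
    return bits_per_char / baud_rate

def silent_intervals(baud_rate, **kwargs):
    """Return the 1.5-char inter-byte limit and the 3.5-char inter-frame gap."""
    t_char = char_time_seconds(baud_rate, **kwargs)
    return 1.5 * t_char, 3.5 * t_char

t15, t35 = silent_intervals(9600)
print(f"1.5 char times: {t15 * 1000:.3f} ms")   # ~1.719 ms at 9600 baud, 11-bit characters
print(f"3.5 char times: {t35 * 1000:.3f} ms")   # ~4.010 ms
```

If I remember the serial line spec correctly, for baud rates above 19200 it recommends using fixed values (750 µs and 1.75 ms) rather than values scaled from the baud rate.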

Why 55 AA is used as the boot signature on IBM PCs? [closed]

Why does the IBM PC architecture use the 55 AA magic number in the last two bytes of a boot sector as the boot signature?
I suspect it has something to do with their bit patterns, 01010101 and 10101010, but I don't know what.
My guesses are that:
the BIOS performs some bitwise AND/OR/XOR operations on these bytes to compare them, and if the result is, for example, 0, it can easily detect that and jump somewhere.
it could be some parity/integrity safeguard, so that if some of these bits are corrupted, the damage could be detected and the signature still considered valid, letting the system boot properly even if those particular bits on the disk have been broken.
Maybe one of you could help me answer this nagging question?
I remember once reading about these bit patterns somewhere, but I don't remember where. It might have been in a paper book, because I cannot find anything about it on the net.
I think it was chosen arbitrarily because 10101010 01010101 seemed like a nice bit pattern. The Apple ][+ reset vector was XORed with $A5 (10100101) to produce a check value. Some machines used something more "specific" for boot validation: on PET-derived machines (e.g. the VIC-20 and Commodore 64 from Commodore Business Machines), a bootable cartridge image located at, say, address $8000 would have the PETSCII string "CBM80" stored at address $8004 (a cart starting at $A000 would have the string "CBMA0" at $A004, etc.). I guess IBM didn't think disks for any other machine would be inserted and happen to have $55AA in the last two bytes of the first sector.
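For what it's worth, the check itself is trivial: the BIOS just compares the last two bytes of the 512-byte sector against 55h AAh (AA55h when read as a little-endian 16-bit word). A quick Python sketch, assuming a raw image file named disk.img (a hypothetical name):

```python
# Check whether the first sector of a disk image carries the 55 AA boot signature.
with open("disk.img", "rb") as f:   # hypothetical image file
    sector = f.read(512)

bootable = len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA
print("boot signature present" if bootable else "no boot signature")
```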
