So I heard that one character is a byte. But then I went to a website which measures text capacity (I'm making a website) and typed in a 1. The byte count showed 1 byte. 8 bits = 1 byte, right? I thought 8 binary digits made 1 character. How can a 1 be a byte and a bit at the same time?
These days, the number of bits per character is not so simple, but there are most definitely 8 bits per byte.
The number of bits/bytes per character will vary depending on your system and the character set.
Original ASCII was 7 bits per character.
That was extended to 8 bits per character and stayed there for a very long time.
These days, with Unicode, you have encodings whose units are 8, 16 and 32 bits (1, 2 and 4 bytes) wide, where the 8-bit variety (UTF-8) matches ASCII for the first 128 characters.
The character 1 is represented by the binary value 00110001b, which is 8 bits and thus 1 byte. The actual binary number 1b would be, at a minimum, 00000001b, which represents a control character (SOH), not the character 1.
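As a quick illustration (a minimal Python sketch, assuming a UTF-8/ASCII encoding, which is presumably what the byte-counting site uses), you can check this yourself:

# Encode the character '1' and look at the bytes it produces (UTF-8 matches ASCII here).
text = "1"
encoded = text.encode("utf-8")
print(len(encoded))                # 1  -> the character '1' occupies exactly one byte
print(encoded[0])                  # 49 -> decimal value of that byte (0x31)
print(format(encoded[0], "08b"))   # 00110001 -> the 8 bits that make up the byte
# A character outside ASCII needs more than one byte in UTF-8:
print(len("€".encode("utf-8")))    # 3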
Hopefully this helps.
Why not 4 bits, or 16 bits?
I assume there are some hardware-related reasons, and I'd like to know how the 8-bit byte became the standard.
It's been a while since I took computer organization, but the relevant Wikipedia article on 'Byte' gives some context.
The byte was originally the smallest number of bits that could hold a single character (I assume standard ASCII). We still use the ASCII standard, so 8 bits per character is still relevant. This sentence, for instance, is 41 bytes. That's easily countable and practical for our purposes.
If we had only 4 bits, there would only be 16 (2^4) possible characters, unless we used 2 bytes to represent a single character, which is computationally less efficient. If we had 16 bits in a byte, we would have a whole lot more 'dead space' in our instruction set: we would allow 65,536 (2^16) possible characters, which would make computers run less efficiently when performing byte-level instructions, especially since our character set is much smaller than that.
Additionally, a byte can represent 2 nibbles. Each nibble is 4 bits, which is the smallest number of bits that can encode any numeric digit from 0 to 9 (10 different digits).
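A minimal Python sketch of the arithmetic above (the sentence string is the one used in the answer):

# How many distinct values fit in a given number of bits.
for bits in (4, 8, 16):
    print(bits, "bits ->", 2 ** bits, "possible values")   # 16, 256, 65536

# With one byte per character (ASCII), the byte count equals the character count.
sentence = "This sentence, for instance, is 41 bytes."
print(len(sentence.encode("ascii")))   # 41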
So here I am in quite a pickle.
If you make a screenshot in Windows 7, it is presented to you in .png format. The question is, does Windows first create a bitmap screenshot and then without your explicit consent convert it to .png? Or is it made in .png from the start?
Question no. 2:
Why does it use a 24-bit format for the image? And is it 1 byte per colour, or do those 24 bits include some kind of transparency?
1: It makes the .png right away, and even if it didn't, I don't see what difference it would make. The .png format is a raster (bitmap) format itself, very similar to .bmp; the main difference is that it can be compressed, but the compression is lossless, so it doesn't erase any usable data.
2: Each color channel takes 1 byte = 8 bits: one byte each for R(ed), G(reen) and B(lue). That sums to 3 x 8 = 24 bits (not bytes) per pixel. You can also add one more channel for transparency, usually called alpha, which would be the 4th byte, and then one pixel takes 32 bits.
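To make the byte arithmetic concrete, here is a small Python sketch (the channel values are made up for illustration) that packs one pixel into 24 or 32 bits:

# Pack an RGB pixel into 24 bits (3 bytes) and an RGBA pixel into 32 bits (4 bytes).
r, g, b, a = 200, 120, 40, 255                   # one byte (0-255) per channel
rgb = (r << 16) | (g << 8) | b                   # 24-bit value
rgba = (r << 24) | (g << 16) | (b << 8) | a      # 32-bit value with an alpha channel
print(len(rgb.to_bytes(3, "big")))               # 3 bytes per RGB pixel
print(len(rgba.to_bytes(4, "big")))              # 4 bytes once alpha is added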
Considering using the characters (elements) a-z and 0-9, how many different permutations are there when using a string of length 32?
The elements can be used more than once, but the results have to be unique (as is the case with a permutation).
I've looked at WolframAlpha; however, that doesn't take into account how many 'elements' would be used, it only considers the length.
You have 32 positions, and into each position goes either one of the 26 letters or one of the 10 digits, so you have 36 possibilities per position. This leaves us with:
36*36*36...*36 (32 times)
= 36^32
= 63340286662973277706162286946811886609896461828096 # (thanks Python ;) )
The answer is (26+10)^32 = 6.3340287e+49
Well, it depends on if you're allowed to have replacement or not.
If you are allowed replacement, you have 36 possibilities for each character position = 36^32.
If you're not allowed replacement, you have 36 for the first, 35 for the second, and so on, until you run out of character positions. That's 36! / 4!, also written as 36 P 32.
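Both cases are easy to check in Python (math.perm requires Python 3.8 or newer):

import math

# With repetition allowed: 36 choices for each of the 32 positions.
print(36 ** 32)                                                       # the 50-digit number quoted above

# Without repetition: 36 P 32 = 36! / (36 - 32)!
print(math.perm(36, 32))
print(math.perm(36, 32) == math.factorial(36) // math.factorial(4))   # True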
In PC properties (i.e. My Computer -> Properties), what does the B stand for in MB or GB? Does it stand for bit or byte?
Byte. A lowercase b means bit: MB = megabyte, Mb = megabit.
Byte for B,
bit for b.
This is the international standard.
Uppercase B always means byte, and lowercase b means bit.
For example, when you see internet speeds listed in MB/s, that refers to megabytes per second, and when you see them listed as Mb/s, that refers to megabits per second.
The letter "B" stands for "byte".
Why does the IBM PC architecture use 55 AA magic numbers in the last two bytes of a bootsector for the boot signature?
I suspect it has something to do with the bit patterns they form, 01010101 10101010, but I don't know what.
My guesses are that:
the BIOS performs some bitwise AND/OR/XOR operations on these bytes to compare them, and if the result is, for example, 0, it can easily detect that and jump somewhere;
it could be some parity/integrity safeguard, so that if some of these bits get corrupted, this could be detected and the signature still treated as valid, letting the system boot properly even if those particular bits on the disk have been damaged.
Maybe one of you could help me answer this nagging question?
I remember I once read somewhere about these bit patterns but don't remember where. It might have been in some paper book, because I cannot find anything about it on the Net.
I think it was chosen arbitrarily because 10101010 01010101 seemed like a nice bit pattern. The Apple ][+ reset vector was xor'ed with $A5 (10100101) to produce a check-value. Some machines used something more "specific" for boot validation: on PET-derived machines (e.g. the VIC-20 and Commodore 64 by Commodore Business Machines), a bootable cartridge image located at, say, address $8000 would have the PETASCII string "CBM80" stored at address $8004 (a cart starting at $A000 would have the string "CBMA0" at $A004, etc.). But I guess IBM didn't think disks for any other machine would be inserted and happen to have $55AA in the last two bytes of the first sector.
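Whatever the original motivation, the check itself is simple. Here is a minimal Python sketch (the sector is fabricated purely for illustration) of what a BIOS-style signature test boils down to:

# Check whether a 512-byte boot sector ends with the 0x55 0xAA signature.
def has_boot_signature(sector: bytes) -> bool:
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

fake_sector = bytes(510) + b"\x55\xaa"            # 510 zero bytes + the signature
print(has_boot_signature(fake_sector))            # True
print(format(0x55, "08b"), format(0xAA, "08b"))   # 01010101 10101010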