Why is 55 AA used as the boot signature on IBM PCs?

Why does the IBM PC architecture use the magic number 55 AA in the last two bytes of a boot sector as the boot signature?
I suspect it has something to do with their bit patterns: 01010101 10101010, but I don't know what.
My guesses are that:
the BIOS performs some bitwise AND/OR/XOR operation on these bytes to compare them, and if the result is, for example, 0, it can easily detect that and jump somewhere.
it could be some parity/integrity safeguard, so that if some of these bits were corrupted, the damage could be detected and the signature still considered valid, allowing the system to boot properly even if those particular bits on the disk were damaged.
Maybe one of you could help me answer this nagging question?
I remember once reading somewhere about these bit patterns, but I don't remember where. It might have been in a paper book, because I cannot find anything about it on the Net.

I think it was chosen arbitrarily because 10101010 01010101 seemed like a nice bit pattern. The Apple ][+ reset vector was XOR'ed with $A5 (10100101) to produce a check value. Some machines used something more "specific" for boot validation: on PET-derived machines (e.g. the VIC-20 and Commodore 64 from Commodore Business Machines), a bootable cartridge image located at, say, address $8000 would have the PETSCII string "CBM80" stored at address $8004 (a cart starting at $A000 would have the string "CBMA0" at $A004, etc.). But I guess IBM didn't think disks for any other machine would be inserted and happen to have $55AA in the last two bytes of the first sector.
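For reference, here is a minimal C sketch (my own illustration, not actual BIOS code) of the check a BIOS-style loader performs on a 512-byte sector image; the file name "sector.bin" is just an example:

    #include <stdio.h>

    /* Minimal sketch (not IBM's actual BIOS code): read a 512-byte
     * sector image and check the boot signature in its last two bytes. */
    int main(void)
    {
        unsigned char sector[512];
        FILE *f = fopen("sector.bin", "rb"); /* hypothetical example file */

        if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
            fprintf(stderr, "could not read a full 512-byte sector\n");
            return 1;
        }
        fclose(f);

        /* Bytes 510 and 511 must be 0x55 and 0xAA; read as a
         * little-endian 16-bit word this is 0xAA55. */
        if (sector[510] == 0x55 && sector[511] == 0xAA)
            puts("bootable: signature 55 AA found");
        else
            puts("not bootable: signature missing");
        return 0;
    }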

Related

Minimum machine instructions for C

I've built an 8-bit computer out of some (I mean a tonne of) wires and 74xx-series TTL gates. The computer was slow, and it was tedious to program. So I made a small assembler (I guess that's the correct term) for my version of assembly language, using an Arduino that would read a text file, convert each line into a machine-code instruction, and save it into program memory.
I'd like to do something like that for BASIC or C, but I'm unsure about the minimum machine instructions required for such programming languages; obviously jumps and simple adding and subtracting won't do.
I'd like to know this so I can design and build a 16 bit computer with these instructions.
but I'm unsure about the minimum machine instructions required for such programming languages
The minimum you need is just an x86-like MOV instruction with its addressing modes, as thoroughly demonstrated by the M/o/Vfuscator project. It features a working, usable C compiler that compiles into nothing but MOV instructions, including software floating-point support that is also nothing but MOV instructions.
According to the author, it was inspired by a paper showing that x86 MOV is Turing-complete. (That theoretical claim should be taken with a grain of salt: no machine with fixed addressing is a universal Turing machine unless it has access to an external infinite tape. Still, the instruction set appears to be no less powerful when reduced to the MOV subset.)
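To get a feel for how MOV alone can express control flow, here is a small C sketch (my own illustration, not taken from the M/o/Vfuscator source) of the core trick: a conditional becomes a table lookup, so selecting between two values needs only loads and stores.

    #include <stdio.h>

    /* Illustration of the MOV-only trick (my sketch, not M/o/Vfuscator
     * code): a conditional select done purely with loads and stores.
     * "if (cond) x = a; else x = b;" becomes an array index, which maps
     * directly onto MOV's addressing modes. */
    static int mov_select(int cond, int a, int b)
    {
        int table[2];
        table[0] = b;             /* store: MOV [table+0], b */
        table[1] = a;             /* store: MOV [table+4], a */
        return table[cond != 0];  /* load:  MOV eax, [table + 4*cond] */
    }

    int main(void)
    {
        printf("%d\n", mov_select(1, 10, 20)); /* prints 10 */
        printf("%d\n", mov_select(0, 10, 20)); /* prints 20 */
        return 0;
    }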

Implementing LRU in MIPS

How can LRU be implemented in MIPS? The procedures involved require a lot of initialisation, and the demand for registers is quite high when combining LRU with other routines, such as sorting, that use many variables. How can this issue be addressed?
Very few VM implementations actually use LRU, because of the cost. Instead they tend to use NRU (Not Recently Used) as an approximation; see the sketch after this description.
Associate each mapped-in page with a bit that is set whenever the page is used (read from or written to). Have a process that regularly works round the pages in cyclical order, clearing this bit; this is sometimes known as the clock algorithm. When you want to evict a page, choose one that does not have the bit set, and so has not been used since the last time the cyclical process got round to it.
If you don't even have a hardware-supported "referenced" bit, emulate it: have the cyclical process clear the valid bit in the page table, and have the interrupt handler for accessing an invalid page set a software bit to record that the page was referenced, mark the page valid again, and restart the instruction that trapped.
See e.g. http://homes.cs.washington.edu/~tom/Slides/caching2.pptx especially slide 19
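Here is a minimal C sketch of the clock algorithm described above (my own illustration, not MIPS-specific): one referenced bit per frame, and a cyclic hand that clears bits until it finds a victim.

    #include <stdbool.h>
    #include <stdio.h>

    /* Sketch of the clock (NRU / second-chance) algorithm described
     * above. One "referenced" bit per frame; the hand sweeps cyclically,
     * clearing bits, and evicts the first frame whose bit is clear. */
    #define NFRAMES 8

    static bool referenced[NFRAMES]; /* set by the MMU / fault handler on access */
    static int hand;                 /* current position of the clock hand */

    static int choose_victim(void)
    {
        for (;;) {
            if (!referenced[hand]) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;        /* not used since last sweep: evict */
            }
            referenced[hand] = false; /* give it a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void)
    {
        referenced[0] = referenced[1] = true;        /* pretend frames 0,1 were touched */
        printf("evict frame %d\n", choose_victim()); /* prints 2 */
        return 0;
    }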

What format does Windows save screenshots in?

So here I am in quite a pickle.
If you take a screenshot in Windows 7, it is presented to you in .png format. The question is: does Windows first create a bitmap screenshot and then, without your explicit consent, convert it to .png? Or is it made as a .png from the start?
Question no. 2:
Why does it use a 24-bit format for the image? And is it 1 byte per colour, or do those 24 bits include some kind of transparency?
1: It makes the .png right away, and even if it didn't, I don't see what difference it would make. PNG is a raster (bitmap) format itself, very similar to .bmp; the main difference is that it is compressed, but the compression is lossless, so no usable data is erased.
2: Each colour channel takes 1 byte = 8 bits: one byte each for R(ed), G(reen) and B(lue). That sums to 3 × 8 = 24 bits (not bytes). You can also add one more channel for transparency, usually called alpha, which would be the 4th byte; one pixel then takes 32 bits.
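A small C sketch (my own illustration, not Windows code) of how those channel bytes pack into a pixel value:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration of the channel layout described above: pack 8-bit
     * R, G, B (and optionally alpha) channels into one pixel value. */
    int main(void)
    {
        uint8_t r = 0xDE, g = 0xAD, b = 0xBE, a = 0xFF;

        uint32_t rgb24  = ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
        uint32_t rgba32 = ((uint32_t)r << 24) | ((uint32_t)g << 16)
                        | ((uint32_t)b << 8)  | a;

        printf("24-bit pixel: 0x%06X (3 bytes)\n", (unsigned)rgb24);  /* 0xDEADBE */
        printf("32-bit pixel: 0x%08X (4 bytes)\n", (unsigned)rgba32); /* 0xDEADBEFF */
        return 0;
    }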

Tricky Encryption Algorithm Design

Bob and Alice each have a bit string they want to keep private. They each want to know what the logical AND of their two bit strings would be without telling the other or anyone else their actual bit strings... how can they do this? Keep in mind that even once they both hold the AND of their two bit strings, they should still not be able to calculate the other person's string exactly (unless of course one of their strings was all 1s).
I know I have seen something similar before, in some sort of key system/voting system, but I couldn't remember the details. It has to be something like: make a private random key, XOR it in, and use that somehow... but I couldn't work out the details. Any clever encryption people out there?
I think that you are looking for homomorphic encryption systems, in which it's possible to do computation on encrypted values without ever exposing what those encrypted values are. This encompasses a far more general set of problems than simply computing bitwise AND.
Hope this helps!
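To see the flavor of it, here is a toy C sketch (my own illustration, not from this answer) using textbook RSA with tiny, completely insecure parameters: unpadded RSA is multiplicatively homomorphic, so Dec(Enc(a) * Enc(b) mod n) = a * b, and for bit values a * b is exactly a AND b. (Raw RSA on 0/1 messages would leak them outright, so real protocols use randomized schemes such as Paillier or ElGamal; this only demonstrates the compute-on-ciphertexts property.)

    #include <stdint.h>
    #include <stdio.h>

    /* Toy illustration only (textbook RSA with tiny numbers, totally
     * insecure): unpadded RSA is multiplicatively homomorphic, so
     * Dec(Enc(a) * Enc(b) mod n) == a * b. */
    static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod)
    {
        uint64_t result = 1;
        base %= mod;
        while (exp > 0) {
            if (exp & 1)
                result = result * base % mod;
            base = base * base % mod;
            exp >>= 1;
        }
        return result;
    }

    int main(void)
    {
        /* n = 61 * 53, e = 17, d = 2753: classic textbook parameters. */
        const uint64_t n = 3233, e = 17, d = 2753;
        uint64_t a = 6, b = 7; /* the two secret values */

        uint64_t ea = modpow(a, e, n);  /* Enc(a) */
        uint64_t eb = modpow(b, e, n);  /* Enc(b) */
        uint64_t product = ea * eb % n; /* computed on ciphertexts only */

        printf("a * b = %llu\n",
               (unsigned long long)modpow(product, d, n)); /* prints 42 */
        return 0;
    }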

Can I completely disable a PCI slot in Linux?

Like many of you, I like to play a game every once in a while. Since I use Linux for my daily tasks (programming, writing papers, browsing, etc.), I only exploit my graphics card's capabilities in Windows while gaming.
Lately I noticed my energy bills have become really high, and I would like to reduce my computer's energy consumption. My graphics card uses 110 watts when idle, whereas a low-end Radeon HD5xxx uses only 5 watts. My computer is powered on about 40 hours a week, of which only 3 hours are gaming. This means I waste about 202 kWh a year (!).
I figured I could just buy a DVI splitter and a low-end Radeon card, and disable the PCI slot of the high-end card in Linux. I Googled a bit, but I'm not sure which search terms to use, so I haven't found anything useful.
Too long, didn't read: is it possible to cut off the power to a PCI slot using Linux?
No.
What you're asking isn't even a "Linux" question but a motherboard question: is it electrically possible to do this?
The answer is still no.
The only chance you would have is to get the spec of the chip/card in the slot and see whether there is a bit you can set that would "disable" it or put it into some low-power mode.
