I've built an 8-bit computer out of some (I mean a tonne of) wires and 74xx-series TTL gates. The computer was slow and it was tedious to program things, so I made a small interpreter (I guess that's the correct term) for my version of assembly language, using an Arduino that would read a text file, convert each line into a machine-code instruction, and then save it into the program memory.
I'd like to do something like that for BASIC or C, but I'm unsure about the minimum machine instructions required for such programming languages; obviously jumps and simple adding and subtracting won't do.
I'd like to know this so I can design and build a 16 bit computer with these instructions.
"but I'm unsure about the minimum machine instructions required for such programming languages"
The minimum you need is just an x86-like MOV instruction with its addressing modes, which is thoroughly demonstrated by the M/o/Vfuscator project. This features a working, usable C compiler which compiles into nothing but MOV instructions. It includes software floating-point support, which is also nothing but MOV instructions.
According to the author, it was inspired by a paper which shows that x86 MOV is Turing Complete. (Though that theoretical claim should be taken with a grain of salt: no machine with fixed addressing is a Universal Turing Machine, unless it has access to an external infinite tape. However, it appears to be the case that the instruction set is no less powerful when reduced down to the MOV subset.)
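To give a flavour of how control flow survives without jumps, here is a minimal sketch in C (not actual M/o/Vfuscator output) of the core trick: a conditional is turned into pure data movement by storing both possible outcomes in a table and letting the condition select which entry is read back.

    #include <stdio.h>

    int main(void) {
        int x = 7;
        int cond = (x > 5);        /* 0 or 1; in true MOV-only code even this
                                      comparison is built out of table lookups */
        int table[2];
        table[0] = -1;             /* value to use if the condition is false */
        table[1] = 42;             /* value to use if the condition is true  */
        int result = table[cond];  /* the "branch" is performed by moving data */
        printf("%d\n", result);
        return 0;
    }

Roughly speaking, the M/o/Vfuscator applies this select-by-indexing idea to every comparison, arithmetic operation and loop, which is why plain MOV with indexed addressing turns out to be enough.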
I studied how to optimize algorithms for multiprocessor systems. Now I would like to understand, in broad terms, how these algorithms can be turned into code.
I know that there are MPI-based libraries that help in developing software that is portable to different types of systems, but it is exactly the word "portable" that confuses me: how can the program automatically adapt to an arbitrary number of processors at runtime, given that this is an option of mpirun? How can the software decide on the proper topology (mesh, hypercube, tree, ring, etc.)? Can the programmer specify the preferred topology through MPI?
You start the application with a fixed number of cores. Thus, it cannot automatically adapt to an arbitrary number of processors at runtime.
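What "portable" means in practice is that the program does not hard-code the process count: it asks MPI at startup how many processes it was launched with and divides the work accordingly. A minimal sketch in C (the work-splitting scheme is purely illustrative):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int size, rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes mpirun started */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which one of them am I? */

        /* Divide 1000 iterations among however many processes exist. */
        int n = 1000;
        int chunk = n / size;
        int start = rank * chunk;
        int end = (rank == size - 1) ? n : start + chunk;
        printf("process %d of %d handles iterations [%d, %d)\n",
               rank, size, start, end);

        MPI_Finalize();
        return 0;
    }

The same binary can then be launched with mpirun -np 4 or mpirun -np 64 without recompiling; that is the sense in which it is portable across machine sizes.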
You can tune your software to the topology of your cluster. This is really advanced and for sure not portable. It only makes sense if you have a fixed cluster and are striving for the last bit of performance.
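As for specifying a preferred topology: MPI does provide virtual topology routines (MPI_Cart_create, MPI_Graph_create), which describe how processes are logically arranged and allow the implementation to reorder ranks to better match the hardware. Whether that mapping actually reflects the physical cluster layout is implementation-dependent. A rough sketch of a 2-D torus:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int size, rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Ask MPI for a balanced 2-D factorisation of 'size' processes. */
        int dims[2] = {0, 0};
        MPI_Dims_create(size, 2, dims);

        /* Both dimensions periodic -> a torus; reorder = 1 lets the
           implementation renumber ranks to fit the machine better. */
        int periods[2] = {1, 1};
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

        int coords[2];
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_coords(cart, rank, 2, coords);
        printf("rank %d sits at (%d, %d) in a %d x %d torus\n",
               rank, coords[0], coords[1], dims[0], dims[1]);

        MPI_Finalize();
        return 0;
    }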
Best regards, Georg
Can an Operating System be considered an algorithm? Focus on the finiteness property, please. I have contradictory views on it right now, with one professor telling me one thing and the other something else.
The answer depends on picky little details in your definition of the word 'algorithm', which are not relevant in any practical context.
"Yes" is a very good answer, since the OS is an algorithm for computing the next state of the kernel given the previous state and a hardware or software interrupt. The computer runs this algorithm every time an interrupt occurs.
But if you are being asked to "focus on the finiteness property", then whoever is asking probably wants you to say "no", because the OS doesn't necessarily ever terminate... (except that when you characterize it as I did above, it does :-)
By definition, an Operating System cannot be called an algorithm.
Let us take a look at what an algorithm is:
"a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer."
The Operating System is composed of a set of rules (in the software code itself) which allow the user to perform tasks on a system, but it is not itself defined as a set of rules for solving a particular problem.
With this said, the Operating System itself is not an algorithm, but we can write an algorithm on how to use it. We can also write algorithms for an Operating System, defining how it should work, but to call the Operating System itself an algorithm does not make much sense. The Operating System is just a piece of software like any other, though considerably bigger and more complex. The question is, would you call MS Word or Photoshop an algorithm?
The Operating System is, however, composed of several algorithms.
I'm sure people will have differing views on this matter.
From Merriam-Webster: "a procedure for solving a mathematical problem ... in a finite number of steps that frequently involves repetition of an operation". The problem with an OS is that, even if you are talking about a fixed distribution, so that it can consist of a discrete step-by-step procedure, it is not made for solving "a problem". It is made for solving many problems. It consists of many algorithms, but it is not a discrete algorithm in and of itself.
How can LRU be implemented in MIPS? The procedures involved require a lot of initialisation, and the register requirement is quite high when trying to implement LRU together with other routines, such as sorting, and other programs that use more variables. How can this issue be addressed?
Very few VM implementations actually use LRU, because of the cost. Instead they tend to use NRU (Not Recently Used) as an approximation. Associate each mapped-in page with a bit which is set when that page is used (read from or written to). Have a process that regularly works round the pages in cyclical order, clearing this bit. When you want to evict a page, choose one that does not have this bit set, and so has not been used since the last time the cyclical process got round to it. If you don't even have a hardware-supported "referenced" bit, emulate it by having the cyclical process (this is sometimes known as the clock algorithm) clear the valid bit of the page table entry, and have the interrupt handler for accessing an invalid page set a bit recording that the page was referenced, before marking the page valid again and restarting the instruction that trapped.
See e.g. http://homes.cs.washington.edu/~tom/Slides/caching2.pptx (especially slide 19).
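For concreteness, here is a small sketch in C of the clock (second-chance) scan described above; the frame table layout and names are invented for illustration and are not tied to MIPS or any particular kernel.

    #include <stdbool.h>

    #define NFRAMES 64

    typedef struct {
        int  page;        /* which virtual page occupies this frame         */
        bool referenced;  /* set by hardware or the fault handler on access */
    } Frame;

    static Frame frames[NFRAMES];
    static int hand = 0;  /* the "clock hand" sweeping over the frames */

    /* Pick a frame to evict: skip frames referenced since the hand last
       passed, clearing their bit as we go (the "second chance"). */
    int choose_victim(void)
    {
        for (;;) {
            if (!frames[hand].referenced) {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].referenced = false;  /* give it a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }

The state is just one bit per frame and one index, which is why this approximation needs far less bookkeeping (and far fewer registers) than true LRU.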
I love using Google for quick back-of-the-envelope calculations. For instance, if I want to know the approximate weight of a carbon-12 ion with charge state 4, I search for
12 u -4*electron mass in u
and get the answer 11.9978057 atomic mass units. More complex things, such as the cyclotron frequency of this ion in some magnetic field, are just as easy:
1/(2*pi)*4* (elementary charge)/(12 u - 4*(electron mass)) * 5.1125 Tesla
This returns the correct answer, 26.174171 MHz. The fact that I can enter 12 u - 4*(electron mass) and Google converts the units on the fly is really helpful to me. WolframAlpha can do even more, but Google is a lot quicker and does not ask for a subscription after my nth query.
As an offline solution, I used a Matlab script in which I had most constants defined, but Matlab takes 30 sec to 1 min to start up, which is frustrating. Mathematica is not much faster to start up, either. Also, for technical reasons I have to use network licenses, so these programs are not really offline solutions any more. I switched to Excel (which loads quite fast), where I have a sheet that uses named ranges. This is semi-convenient, but it just feels wrong.
Is there any lightweight Windows program that provides this functionality offline?
You can use the Units program that was originally developed for UNIX. There is a native Windows port that is based on version 1.87 (2008). The current version of the UNIX tool is 2.01 (2012).
Units was originally designed to do simple unit conversion, but it also supports evaluating mathematical expressions. It requires you to specify the unit of the output and gives you two lines as a result: the result that you want is on the first line; the second line is the inverse of the result.
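As a rough illustration (the exact formatting and constant names depend on the units database shipped with your version), the mass calculation from the question looks something like this in an interactive session:

    You have: 12 u - 4 electronmass
    You want: u
            * 11.997806
            / 0.083349

which matches the Google result quoted above.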
This program has three major shortcomings when compared to the Google math expression evaluation:
1. You have to know the unit that you want to get in advance. (I don't always know it, and sometimes I just don't care. Often this unit is "1", as for the result of the calculation sin(pi).)
2. It does not tell you how it interpreted the units that you entered. Google always returns a parsed version of the input string, so that you can see where Google misunderstood you.
3. It is quite strict when it comes to variable names. Multi-word names are not permitted, so electron mass is called electronmass (m_e also works).
The installer.exe is easy enough to use, but on my Windows XP machine it did not set the path variables of the command line correctly. I set up a simple shortcut on my Desktop that points to: C:\Programs\GnuWin32\bin\units.exe.
Overall, Units is a nice and quick calculator that starts up a few thousand times faster than Matlab or Mathematica - but the user interface has some shortcomings.
Why does the IBM PC architecture use 55 AA magic numbers in the last two bytes of a bootsector for the boot signature?
I suspect it has something to do with their bit patterns, 01010101 10101010, but I don't know what.
My guesses are that:
the BIOS performs some bitwise AND/OR/XOR operations on these bytes to compare them, and if the result is, for example, 0, it can easily detect that and jump somewhere.
it could be some parity/integrity safeguard, so that if some of these bits are corrupted, this could be detected and the signature could still be considered valid, allowing the system to boot properly even if these particular bits on the disk have been damaged.
Maybe one of you could help me answer this nagging question?
I remember I once read somewhere about these bit patterns but don't remember where. It might have been in some paper book, because I cannot find anything about it on the Net.
I think it was chosen arbitrarily because 10101010 01010101 seemed like a nice bit pattern. The Apple ][+ reset vector was xor'ed with $A5 (10100101) to produce a check value. Some machines used something more "specific" for boot validation; for PET-derived machines (e.g. the VIC-20 and Commodore 64 by Commodore Business Machines), a bootable cartridge image located at e.g. address $8000 would have the PETSCII string "CBM80" stored at address $8004 (a cart starting at $A000 would have the string "CBMA0" at $A004, etc.), but I guess IBM didn't think disks for any other machine would be inserted and happen to have $55AA in the last two bytes of the first sector.
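Whatever the reason for the particular value, the check itself is trivial: the BIOS simply compares the last two bytes of the 512-byte sector against 0x55 and 0xAA (the little-endian word 0xAA55) and refuses to boot from the device otherwise. A minimal sketch of that check in C:

    #include <stdint.h>
    #include <stdbool.h>

    /* Return true if a 512-byte boot sector carries the signature:
       byte 510 == 0x55, byte 511 == 0xAA (word 0xAA55 read little-endian). */
    bool has_boot_signature(const uint8_t sector[512])
    {
        return sector[510] == 0x55 && sector[511] == 0xAA;
    }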