Implementing LRU in MIPS [closed]

How can LRU be implemented in MIPS? The procedures involved require a lot of initialisation, and register pressure gets high when LRU is implemented alongside other routines, such as sort, that use many variables. How can this issue be addressed?

Very few VM implementations actually use LRU, because of the cost. Instead they tend to use NRU (Not Recently Used) as an approximation. Associate each mapped-in page with a bit that is set when that page is used (read from or written to). Have a process that regularly works round the pages in cyclical order, clearing this bit. When you want to evict a page, choose one that does not have this bit set, and so has not been used since the last time the cyclical process got round to it.
If you don't even have a hardware-supported "not recently used" bit, emulate it by having the cyclical process (this is sometimes known as the clock algorithm) clear the valid bit of the page-table entry, and have the interrupt handler for accessing an invalid page set a bit to record that the page was referenced, before marking the page valid again and restarting the instruction that trapped.
See e.g. http://homes.cs.washington.edu/~tom/Slides/caching2.pptx especially slide 19
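To make the clock algorithm concrete, here is a minimal C++ sketch of the eviction sweep described above; the Page struct, its field names, and the choose_victim function are illustrative assumptions, not code from any particular kernel:

```cpp
#include <cstddef>
#include <vector>

// One entry per physical frame; 'referenced' stands in for the
// hardware-set (or software-emulated) "used since last sweep" bit.
struct Page {
    bool referenced = false;
};

std::size_t clock_hand = 0;  // current position of the cyclical sweep

// Sweep cyclically, clearing reference bits. A page whose bit is
// already clear has not been used since the previous pass, so it is
// "not recently used" and can be evicted.
std::size_t choose_victim(std::vector<Page>& frames) {
    for (;;) {
        Page& p = frames[clock_hand];
        std::size_t current = clock_hand;
        clock_hand = (clock_hand + 1) % frames.size();
        if (!p.referenced)
            return current;    // unused since the last pass: evict it
        p.referenced = false;  // clear the bit; give a second chance
    }
}
```

Note that the loop always terminates: even if every page is referenced, the first full revolution clears all the bits and the second finds a victim.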

Related

Do any computer languages not use a stack? [closed]

Do any computer languages not use a stack data structure to keep track of execution progress?
Or is the use of this data structure an emergent requirement stemming from something inherent to most computer languages or Turing machines?
With a traditional "C-style" stack, certain language features are difficult or impossible to implement. For example, closures can't easily be implemented with a traditional stack, because a closure needs a pointer to an old activation record to work correctly, and in a C-style stack that memory is automatically reclaimed when the function returns. As another example, generators and coroutines need their own memory to store local variables and the position at which to resume, and therefore can't easily be implemented on top of a standard stack.
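To illustrate, here is a hedged C++ sketch using lambdas as the closure mechanism; the function names are hypothetical:

```cpp
#include <functional>

// Broken: the closure captures a reference into this function's
// activation record, which a C-style stack reclaims on return,
// leaving the closure dangling.
std::function<int()> make_counter_broken() {
    int count = 0;
    return [&count] { return ++count; };  // undefined behaviour when called
}

// Working: capturing by value moves the state into the closure object
// itself, which std::function stores outside this stack frame, so the
// counter's memory outlives the call.
std::function<int()> make_counter() {
    int count = 0;
    return [count]() mutable { return ++count; };
}
```

The working version succeeds only because the captured state lives in the closure object rather than in the reclaimed stack frame, which is exactly the point above.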
Hope this helps!

Website Performance Issue [closed]

If a website suddenly starts experiencing performance issues, what could the reasons be?
In my view, the database could be one reason, and disk space on the server another; I would like to know more about the possibilities.
There can be any number of reasons, and they depend on your specific setup. Given what you have described, you can look at the following (a rough timing helper is sketched after this list):
System counters on the web server/app server: CPU, memory, paging, I/O, disk.
Any changes you made to the application: were those changes costly in performance terms? Analyse them to check whether any improvement is required.
If system counters are choking, check which one is the bottleneck and try to resolve it.
Check all layers/tiers of the application: app server, database, directory services, etc.
If the database is the bottleneck, identify costly queries and apply indexes and other DB tuning.
If the app server is choking, identify and improve the methods that are resource-heavy.
Performance tuning is not a fast-track process; it takes time. Identify a bottleneck, try to resolve it, and repeat until you get the desired performance.
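As a starting point for the "identify bottlenecks" step, a crude timing wrapper can help narrow down which tier a slow request spends its time in. This is a hedged C++ sketch; the helper name and usage are illustrative, and a real profiler or APM tool is usually the better first resort:

```cpp
#include <chrono>
#include <cstdio>

// Time an arbitrary piece of work and report the elapsed milliseconds.
template <typename Work>
void timed(const char* label, Work&& work) {
    auto start = std::chrono::steady_clock::now();
    work();  // e.g. execute the suspect query or handler
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::printf("%s: %lld ms\n", label,
                static_cast<long long>(elapsed.count()));
}

// Usage: timed("db query", [] { /* run the query */ });
//        timed("template render", [] { /* render the page */ });
```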

Computer architecture cache pollution [closed]

What I read on Wikipedia is that cache pollution occurs when we access some data once, never use it again, and that data then occupies precious cache space, so some useful data gets evicted to make room for it.
Is my understanding correct, or am I missing something? Can I get more information on cache pollution?
Thanks.
Most cache memories use the least recently used (LRU) replacement algorithm, i.e. they replace the data in the cache that have gone unused the longest. So if you fill the whole cache with new data, the data loaded earliest will be replaced, even if they will be used again, while the data loaded later will not.
It therefore makes sense to keep the behaviour of the cache memory in mind when developing data-intensive algorithms.
I don't know which Wikipedia article you have read, but here is a good example.
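As a rough illustration of the effect (the buffer size and cache behaviour are assumptions for demonstration, not measurements), consider a one-pass scan in C++:

```cpp
#include <cstddef>
#include <vector>

// Assume 'big' is far larger than the last-level cache.
long long sum_once(const std::vector<char>& big) {
    long long sum = 0;
    // Each byte is touched exactly once and never reused, yet every
    // access installs a fresh cache line. Under LRU replacement those
    // lines evict data the rest of the program is still using -- this
    // is cache pollution.
    for (std::size_t i = 0; i < big.size(); ++i)
        sum += big[i];
    return sum;
}
```

This is why some architectures offer non-temporal load/store hints, so that one-shot streaming data can bypass or minimally disturb the cache.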

Why is RAII so named? [closed]

The sense I get about this idiom is that it is useful because it ensures that resources are released after the object that uses them goes out of scope.
In other words, it's more about de-acquisition and de-initialisation, so why is this idiom named the way it is?
First, I should note that it's widely considered a poorly named idiom. Many people prefer SBRM, which stands for Scope-Based Resource Management. Although I (grudgingly) go along with using "RAII" simply because it's widely known and used, I do think SBRM gives a much better description of the real intent.
Second, when RAII was new, it applied as much to acquiring resources as to releasing them. In particular, at the time it was fairly common to see initialisation happen in two steps: you'd first define an object, and only afterwards dynamically allocate any resources associated with it. Many style guides advocated this, largely because at that time (before C++ had exception handling) there was no good way to deal with failure in a constructor. Therefore, the style guides often said, constructors should do only the bare minimum of work, and specifically avoid anything that was open to failure -- especially allocating resources (and a few still say things like that).
Quite a few of those designs already released the resources in the destructor, though, so the releasing side wouldn't have been as clear a distinction from previous practice.
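As a small illustration of the contrast, here is a hypothetical RAII wrapper (not code from the original discussion): acquisition happens in the constructor and either fully succeeds or throws, and release is tied to scope exit.

```cpp
#include <cstdio>
#include <stdexcept>

class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_)  // acquisition IS initialisation: no half-built object
            throw std::runtime_error("open failed");
    }
    ~File() { if (f_) std::fclose(f_); }   // released on scope exit
    File(const File&) = delete;            // one owner, no double close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};

void use() {
    File f("data.txt");  // usable the moment it exists
    // ... read via f.get() ...
}                        // closed here, even if an exception propagates
```

Contrast this with the two-step style the answer describes, where the object would be defined first and a separate open call made afterwards.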

Windows Task Manager Columns - Handles [closed]

What is the Windows Task Manager "Handles" column a measure of? File handles? Or page-file pointers? Also, is it bad for one program to have 8000 handles?
It's a measure of kernel handles. Kernel handle types and the functions that create them include:
File handles (CreateFile)
Memory mapped files (CreateFileMapping)
Events (CreateEvent)
Mutexes (CreateMutex)
Semaphores (CreateSemaphore)
Processes (CreateProcess)
Threads (CreateThread)
And more that I forget or have never heard of.
8000 for a single process seems incredibly excessive.
8000 for a single process does seem rather a lot, but not necessarily out of the question - it depends on the behaviour. You should think of handles as a special kind of memory - high usage is a possible warning sign, but not if it is stable. If the handle usage is stable, then it is not a sign of a leak, although you might have some optimisation to perform to get it to use fewer handles.
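If you want to tell a stable-but-high handle count from a leak, the Win32 API GetProcessHandleCount can be polled over time; here is a minimal C++ sketch (the sampling loop and interval are illustrative):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE self = GetCurrentProcess();
    for (int i = 0; i < 10; ++i) {
        DWORD count = 0;
        if (GetProcessHandleCount(self, &count))
            std::printf("handles: %lu\n",
                        static_cast<unsigned long>(count));
        Sleep(1000);  // sample once a second
    }
}
// A count that keeps climbing across samples suggests a leak; a high
// but flat count is more likely just heavy (if optimisable) usage.
```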
