Fetch steps and execute steps - CPU

Recently I have been doing some self-study on CPUs, and I found an exercise question in a book. Since the structure of the book is messed up, I am not able to solve this question.
The instruction Sub[60][61] copies the number stored at address 60 to address 61. Assume the instruction is stored at address 70; show the fetch and execute steps.
Please give me a hand.
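For what it's worth, here is a minimal sketch (in Python, with an invented encoding) of how textbooks usually break down the fetch and execute steps for such a two-address, memory-to-memory instruction; the register names PC, MAR, MDR and IR follow the usual convention:

    # Sketch of a fetch-execute cycle for a hypothetical two-address
    # instruction stored at address 70; the book defines Sub[60][61]
    # as copying memory[60] to memory[61]. The encoding (an opcode
    # string plus two addresses) is invented for illustration.
    memory = {70: ("Sub", 60, 61), 60: 42, 61: 0}

    PC = 70              # program counter points at the instruction

    # --- Fetch ---
    MAR = PC             # 1. copy PC into the memory address register
    MDR = memory[MAR]    # 2. read memory at MAR into the memory data register
    IR = MDR             # 3. move the fetched word into the instruction register
    PC = PC + 1          # 4. advance PC to the next instruction

    # --- Execute ---
    opcode, src, dst = IR    # decode the opcode and the two address fields
    MAR = src                # 5. put the source address (60) into MAR
    MDR = memory[MAR]        # 6. read the operand from memory[60]
    MAR = dst                # 7. put the destination address (61) into MAR
    memory[MAR] = MDR        # 8. write the operand to memory[61]

    print(memory[61])        # 42 - the value has been copied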

Related

How does the enhanced second-chance algorithm give preference to pages that have been modified?

I have been reading the textbook "Operating System Concepts", 10th Edition, by Abraham Silberschatz, Peter B. Galvin, and Greg Gagne.
The textbook first says the following about the modify bit, on page 403:
"If the bit is set, we know that the page has been modified since it was read in from secondary storage. In this case, we must write the page to storage. If the modify bit is not set, however, the page has not been modified since it was read into memory. In this case, we need not write the memory page to storage: it is already there."
However, later in the book (pages 410-411), it seems to contradict this:
"We can enhance the second-chance algorithm by considering the reference bit
and the modify bit (described in Section 10.4.1) as an ordered pair. With these
two bits, we have the following four possible classes:
1. (0, 0) neither recently used nor modified—best page to replace
2. (0, 1) not recently used but modified—not quite as good, because the page
will need to be written out before replacement
3. (1, 0) recently used but clean—probably will be used again soon
4. (1, 1) recently used and modified—probably will be used again soon, and
the page will be need to be written out to secondary storage before it can
be replaced
Each page is in one of these four classes. When page replacement is called
for, we use the same scheme as in the clock algorithm; but instead of examining
whether the page to which we are pointing has the reference bit set to 1,
we examine the class to which that page belongs. We replace the first page
encountered in the lowest nonempty class. Notice that we may have to scan the
circular queue several times before we find a page to be replaced. The major
difference between this algorithm and the simpler clock algorithm is that here
we give preference to those pages that have been modified in order to reduce
the number of I/Os required."
If we are giving preference to pages that have been modified, doesn't that mean we are increasing the number of I/Os required? If the page has been modified, then we need to write that change back to storage.
Sorry, I'm confused about how the enhanced second-chance algorithm is supposed to reduce the number of I/Os required.
Thank you.
What I think they mean is that the preference is to retain the modified pages in memory (although the wording reads somewhat confusingly to me).
This sentence:
We replace the first page encountered in the lowest nonempty class
And the ordering of the classes above it means that it would first discard a page that has not been recently used and has not been modified.
I would also read "in order to reduce the number of I/Os required" as applying to the context of immediately needing to solve the problem of not having enough free memory. Usually this needs to be resolved as quickly as possible, and when no write is required, that's the quickest. If the OS can free enough pages from the first class to satisfy the immediate need, that's a really good outcome.
Often OSs have a "lazy write" or "background write" process. The memory pages with the modified bit set are written to disk 'later' when the resources free up (or there are no unmodified pages available and more RAM is required). This is a major reason for shutting down properly and not pulling the power plug - there are probably many modified pages in memory waiting to be written to disk. So in the context of this paragraph, if the current need can be satisfied by releasing pages that haven't been modified, then the remaining pages may get written by the background write process at a later point.
It's interesting that class 2 is given higher preference than class 3. It would be quicker to solve the immediate problem (needing more memory) by discarding something from class 3, because no write is required. However, the OS is predicting that it would likely need to read that memory again fairly soon, and so it does not. There are probably some extremely complex algorithms involved in calculating just how recent "recently used" is, and they probably take into account the speed of the media being written to.
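If it helps to see the scan spelled out, here is a rough Python sketch of one common reading of the enhanced second-chance victim search (the page representation and the two-pass structure are my own simplification, not the book's code):

    # Rough sketch of the enhanced second-chance (clock) victim search.
    # Each page carries a (referenced, modified) bit pair; the scan prefers
    # class (0,0), then (0,1), clearing reference bits as it goes so that
    # classes (1,0) and (1,1) age into the lower classes on later rounds.
    def select_victim(pages, hand):
        """Assumes pages is non-empty; returns (victim_index, new_hand)."""
        n = len(pages)
        while True:
            # Pass 1: look for (referenced=0, modified=0) - the best victim.
            for step in range(n):
                i = (hand + step) % n
                if pages[i]["referenced"] == 0 and pages[i]["modified"] == 0:
                    return i, (i + 1) % n
            # Pass 2: look for (0, 1); clear reference bits along the way,
            # giving recently used pages their "second chance".
            for step in range(n):
                i = (hand + step) % n
                if pages[i]["referenced"] == 0 and pages[i]["modified"] == 1:
                    return i, (i + 1) % n
                pages[i]["referenced"] = 0
            # All reference bits are now 0, so the next round must succeed.

    pages = [{"referenced": 1, "modified": 1},
             {"referenced": 0, "modified": 1},
             {"referenced": 0, "modified": 0}]
    victim, hand = select_victim(pages, 0)
    print(victim)  # 2: the clean, unreferenced page goes first - no write needed

Note how a clean page is always evicted before a dirty one of the same recency, which is exactly the "fewer I/Os now, lazy write later" preference discussed above.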

Reverse engineer an algorithm

I have an algorithm that uses time and other variables to create keys that are 30 characters in length.
The FULL results look like this: d58fj27sdn2348j43dsh2376scnxcs
Because it uses time and other variables, the result will always change.
All you see is the first 6: d58fj2
So basically, if you monitored my results, each time you would see different ones, and only the first 6:
d58fj2
kfd834
n367c6
9vh364
vc822c
c8u23n
df8m23
jmsd83
QUESTION: Would you ever be able to reverse engineer it and figure out the algorithm calculating them? REMEMBER, you NEVER see the full result, just the first 6 characters out of 30.
QUESTION 2: To those saying it's possible: how many keys would you need in order to do that? (And I still mean just the first 6 characters out of the 30.)
Thanks
I'm willing to risk the downvotes, but somehow this quickly started smelling like school (home)work.
The question itself - "Is this reverse-engineerable? REMEMBER, you never see the full result" - is suspicious enough; if you can see the full result, so can I. Whether you store it locally so I can take my time inspecting it, or whether it goes through the wire so I have to hunt it down, is another matter - having to use Wireshark or not, I can still see what's being transmitted to and from the app.
Remember, at some point WEP used to be "unbreakable", while now a lot of low-end laptops can crack it easily.
The second question, however - "how many samples would you need to see to figure it out" - sounds like one of those dumb impractical teacher questions. Some people guess their friends' passwords on the first try; some take a few weeks... The number of tries, unfortunately, isn't the deciding factor in reverse engineering. Only having the time to try them all is; which is why people put expensive locks on their doors - not because they're unbreakable, but because breaking them takes more than a few seconds, which increases the chances that the neighbours will notice the suspicious activity.
But asking the crowd "how many keys would you need to see to crack this algorithm you know nothing about" leads nowhere; keeping the algorithm secret is merely a defensive move that does not provide any guarantees. The author of the algorithm knows very well how many samples one needs to break it using statistical analysis (in the case of WEP, that's anywhere between 5,000 and 200,000 IVs - some keys break down with 5k, some barely with 200k).
Answering your questions in more detail, with academic proof, requires more info from your side; much more than the ambiguous "can you do it, and if yes, how long would it take?" question it currently is.

Vehicle usage optimization using GTFS

I have a GTFS feed defined for my fleet. This gives the routes, trips, and timings. Using this GTFS feed, is it possible to optimize the utilization of my fleet's vehicles? Can I schedule the vehicles such that once one completes a trip, it can be assigned to serve a trip on another route?
I have constraints such as: no vehicle should be running for more than 12 hours, every vehicle must undergo a health check for 2 hours, etc.
To me this sounds like a case of the Knapsack problem.
If such a project exists, kindly let me know. Is there an algorithm that can solve this problem?
Thanks,
Yash
You're asking a question that is typically handled by a scheduling system, one which would produce the GTFS files from the get-go. In smaller systems this actually is not difficult to do, but as the number of routes (or "trip patterns") increases, the process gets more complex.
Before you undertake any project like this, I suggest reading over the TCRP manual on scheduling, paying close attention to the terms "cycle time," "headway," and "interlining."
While I'd love to help more, I don't have time right now to get into the specifics. I performed a similar analysis with automatically collected cycle times on a limited set of routes in my master's thesis, starting on page 118.
I hope this helps. If you have any follow-up questions, post a comment and I'll respond when I have time.
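Not a full answer, but to make the greedy core of the problem concrete: before layering on the 12-hour and health-check constraints, minimizing vehicles is essentially trip chaining - sort trips by departure time and reuse whichever vehicle frees up first. Here is a toy Python sketch with made-up (start, end) minute pairs, not a substitute for a real scheduling system:

    import heapq

    # Greedy trip chaining: assign each trip (sorted by start time) to a
    # vehicle that has finished its previous trip; otherwise add a vehicle.
    # Trips would come from the GTFS stop_times; these tuples are made up.
    trips = sorted([(0, 50), (10, 70), (60, 120), (80, 150)])

    free_at = []          # min-heap of (time the vehicle frees up, vehicle id)
    vehicles = 0
    assignment = {}

    for start, end in trips:
        if free_at and free_at[0][0] <= start:
            _, vid = heapq.heappop(free_at)   # reuse the earliest-free vehicle
        else:
            vehicles += 1                     # nobody free: bring in a new one
            vid = vehicles
        assignment[(start, end)] = vid
        heapq.heappush(free_at, (end, vid))

    print(vehicles)       # 2 vehicles suffice for these four trips

The 12-hour and health-check rules would then become feasibility checks when popping a vehicle (track cumulative duty time and skip vehicles that would exceed it), and deadheading between route endpoints would need to be added to the free-at times - which is where the process gets complex, as noted above.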

trouble with recurrent neural network algorithm for structured data classification

TL;DR
I need help understanding some parts of a specific algorithm for structured data classification. I'm also open to suggestions for different algorithms for this purpose.
Hi all!
I'm currently working on a system involving classification of structured data (I'd prefer not to reveal anything more about it) for which I'm using a simple backpropagation through structure (BPTS) algorithm. I'm planning on modifying the code to make use of a GPU for an additional speed boost later, but at the moment I'm looking for better algorithms than BPTS that I could use.
I recently stumbled upon this paper [1] and was amazed by the results. I decided to give it a try, but I have some trouble understanding parts of the algorithm, as its description is not very clear. I've already emailed some of the authors requesting clarification, but haven't heard from them yet, so I'd really appreciate any insight you guys may have to offer.
The high-level description of the algorithm can be found on page 787. There, in Step 1, the authors randomize the network weights and also "Propagate the input attributes of each node through the data structure from frontier nodes to root forwardly and, hence, obtain the output of root node". My understanding is that Step 1 is never repeated, since it's the initialization step. The part I quote indicates that a one-time activation also takes place here. But what item in the training dataset is used for this activation of the network? And is this activation really supposed to happen only once? For example, in the BPTS algorithm I'm using, for each item in the training dataset, a new neural network - whose topology depends on the current item (data structure) - is created on the fly and activated. Then the error backpropagates, the weights are updated and saved, and the temporary neural network is destroyed.
Another thing that troubles me is Step 3b. There, the authors mention that they update the parameters {A, B, C, D} NT times, using equations (17), (30) and (34). My understanding is that NT denotes the number of items in the training dataset. But equations (17), (30) and (34) already involve ALL items in the training dataset, so, what's the point of solving them (specifically) NT times?
Yet another thing I failed to get is how exactly their algorithm takes into account the (possibly) different structure of each item in the training dataset. I know how this works in BPTS (I described it above), but it's very unclear to me how it works with their algorithm.
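For reference, this is roughly what the per-structure forward pass I described above looks like in code - a toy sketch with shared weights unfolded over each item's tree; the tree format and dimensions are made up for illustration:

    import numpy as np

    # Toy sketch of the per-item unfolding used in BPTS: the same shared
    # weights are applied recursively over each training item's structure,
    # combining child representations bottom-up (frontier nodes to root).
    rng = np.random.default_rng(0)
    DIM = 4                                    # node representation size
    W = rng.standard_normal((DIM, 3 * DIM))    # shared: label + two children
    b = np.zeros(DIM)

    def forward(node):
        """node = (label_vector, left_child_or_None, right_child_or_None)."""
        label, left, right = node
        zeros = np.zeros(DIM)
        l = forward(left) if left is not None else zeros
        r = forward(right) if right is not None else zeros
        x = np.concatenate([label, l, r])
        return np.tanh(W @ x + b)              # root output when called on root

    leaf = (rng.standard_normal(DIM), None, None)
    tree = (rng.standard_normal(DIM), leaf, (rng.standard_normal(DIM), None, None))
    print(forward(tree))                       # representation of the root node

The backward pass then flows through this same unfolding and updates the shared W and b, after which the temporary unfolded network is discarded; what I can't map onto the paper is where its Steps 1 and 3b fit into this per-item picture.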
Okay, that's all for now. If anyone has any idea of what might be going on with this algorithm, I'd be very interested in hearing it (or rather, reading it). Also, if you are aware of other promising algorithms and/or network architectures (could long short-term memory (LSTM) be of use here?) for structured data classification, please don't hesitate to post them.
Thanks in advance for any useful input!
[1] http://www.eie.polyu.edu.hk/~wcsiu/paper_store/Journal/2003/2003_J4-IEEETrans-ChoChiSiu&Tsoi.pdf

What is the difference between an on-line and off-line algorithm?

These terms were used in my data structures textbook, but the explanation was very terse and unclear. I think it has something to do with how much knowledge the algorithm has at each stage of computation.
(Please, don't link to the Wikipedia page: I've already read it and I am still looking for a clarification. An explanation as if I'm twelve years old and/or an example would be much more helpful.)
The Wikipedia page is quite clear:
In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. (For example, selection sort requires that the entire list be given before it can sort it, while insertion sort doesn't.)
Let me expand on the above:
An offline algorithm requires all information BEFORE the algorithm starts. In the Wikipedia example, selection sort is offline because step 1 is: find the minimum value in the list. To do this, you need to have the entire list available - otherwise, how could you possibly know what the minimum value is? You cannot.
Insertion sort, by contrast, is online because it does not need to know anything in advance about the values it will sort; the information is requested WHILE the algorithm is running. Simply put, it can grab new values at every iteration.
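To make the contrast concrete, here is a small sketch (assuming the input arrives as a Python iterator, so selection sort has to drain it up front while insertion sort can consume it one element at a time):

    # Insertion sort is online: after each arriving element it maintains a
    # sorted prefix and never needs to look at the elements still to come.
    def insertion_sort_online(stream):
        result = []
        for x in stream:                  # elements arrive one by one
            i = len(result)
            result.append(x)
            while i > 0 and result[i - 1] > result[i]:
                result[i - 1], result[i] = result[i], result[i - 1]
                i -= 1
        return result

    # Selection sort is offline: its very first step scans the entire
    # input to find the minimum, so everything must be available up front.
    def selection_sort_offline(stream):
        data = list(stream)               # must materialize the whole input
        for i in range(len(data)):
            m = min(range(i, len(data)), key=data.__getitem__)
            data[i], data[m] = data[m], data[i]
        return data

    print(insertion_sort_online(iter([3, 1, 2])))   # [1, 2, 3]
    print(selection_sort_offline(iter([3, 1, 2])))  # [1, 2, 3]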
Still not clear?
Think of the following examples (for four-year-olds!). David is asking you to solve two problems.
In the first problem, he says:
"I'm, going to give you two balls of different masses and you need to
drop them at the same time from a tower.. just to make sure Galileo
was right. You can't use a watch, we'll just eyeball it."
If David gave you only one ball, you'd probably look at him and wonder what you were supposed to be doing. After all, the instructions were pretty clear: you need both balls at the beginning of the problem. This is an offline algorithm.
For the second problem, David says:
"Okay, that went pretty well, but now I need you to go ahead and kick a couple of balls across a field."
He goes ahead and gives you the first ball. You kick it. Then he gives you the second ball, and you kick that. He could also give you a third and fourth ball (without you even knowing that he was going to give them to you). This is an example of an online algorithm. As a matter of fact, you could be kicking balls all day.
I hope this was clear :D
An online algorithm processes the input only piece by piece and doesn't know the actual input size at the beginning of the algorithm.
An often-used example is scheduling: you have a set of machines and an unknown workload. Each machine has a specific speed. You want to clear the workload as fast as possible. But since you don't know the whole input from the beginning (you can often see only the next item in the queue), you can only estimate which machine is the best for the current input. This can result in a non-optimal distribution of your workload, since you cannot make any assumptions about your input data.
An offline algorithm on the other hand works only with complete input data. All workload must be known before the algorithm starts processing the data.
Example:
Workload:
1. Unit (Weight: 1)
2. Unit (Weight: 1)
3. Unit (Weight: 3)
Machines:
1. Machine (1 weight/hour)
2. Machine (2 weights/hour)
Possible result (Online):
1. Unit -> 2. Machine // 2. Machine now has a workload of 30 minutes
2. Unit -> 2. Machine // 2. Machine now has a workload of one hour
either
3. Unit -> 1. Machine // 1. Machine now has a workload of three hours
or
3. Unit -> 2. Machine // 2. Machine now has a workload of 2.5 hours
==> at best, the work is done after 2.5 hours
Possible result (Offline):
1. Unit -> 1. Machine // 1. Machine now has a workload of one hour
2. Unit -> 1. Machine // 1. Machine now has a workload of two hours
3. Unit -> 2. Machine // 2. Machine now has a workload of 1.5 hours
==> the work is done after 2 hours
Note that the better result in the offline algorithm is only possible because we don't use the faster machine from the start. We already know that there will be a heavy unit (unit 3), so that unit should be processed by the fastest machine.
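The numbers above can be reproduced in a few lines (the online rule here is "send each arriving unit to the machine that would finish it earliest, preferring the faster machine on ties", which matches the trace; the offline schedule reserves the fast machine for the heavy unit):

    # Machines with speeds 1 and 2 weights/hour; units of weight 1, 1, 3.
    speeds = [1, 2]
    units = [1, 1, 3]

    # Online greedy: each unit goes to the machine that finishes it earliest
    # (ties broken toward the faster machine, as in the trace above).
    load = [0.0, 0.0]                     # hours of queued work per machine
    for w in units:
        m = min(range(2), key=lambda i: (load[i] + w / speeds[i], -speeds[i]))
        load[m] += w / speeds[m]
    print(max(load))                      # 2.5 hours

    # Offline: knowing unit 3 is heavy, keep the fast machine free for it.
    load = [(1 + 1) / speeds[0], 3 / speeds[1]]
    print(max(load))                      # 2.0 hours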
An offline algorithm knows all about its input data the moment it is invoked. An online algorithm, on the other hand, can get parts or all of its input data while it is running.
An algorithm is said to be online if it does not know the data it will be executing on beforehand. An offline algorithm may see all of the data in advance.
An on-line algorithm is one that receives a sequence of requests and performs an immediate action in response to each request.
In contrast, an off-line algorithm performs its action only after all the requests have been received.
This paper by Richard Karp gives more insight into on-line and off-line algorithms.
We can differentiate between offline and online algorithms based on the availability of the inputs prior to the algorithm's processing.
Offline algorithm: all input information is available to the algorithm and processed simultaneously. With the complete set of input information, the algorithm finds a way to process the inputs efficiently and obtain an optimal solution.
Online algorithm: inputs arrive on the fly, i.e. all input information is not available simultaneously but rather part by part, as a sequence or over time. Upon the arrival of an input, the algorithm has to make an immediate decision without any knowledge of future inputs. In this process, it produces a sequence of decisions that will have an impact on the final quality of its overall performance.
E.g., routing in a communication network:
Data packets from different sources arrive at the nearest router, which has more than one communication link connected to it. When a new data packet arrives, the router has to decide immediately which link the packet should be sent on. (Assume all links lead to the destination, all links have the same bandwidth, and all links are part of the shortest path to the destination.) The objective is to assign each incoming data packet to one of the links, without knowing the future data packets, in such a way that the load of each link is balanced. No link should be overloaded. This is a load-balancing problem.
Here, the scheduler implemented in the router has no idea about future data packets, but it has to make a scheduling decision for each incoming packet.
In contrast, an offline scheduler has full knowledge of all incoming data packets; it can efficiently assign the packets to different links and optimally balance the load among them.
Cache miss problem: in a computer system, the cache is a memory unit used to bridge the speed mismatch between the faster processor and the slower primary memory. The objective of using a cache is to minimize the average access time by keeping frequently accessed pages in it, on the assumption that those pages may be requested by the processor in the near future. Generally, when the processor requests a page, the page is fetched from primary or secondary memory and a copy is stored in the cache. Now suppose the cache is full: the algorithm implemented in the cache has to make an immediate decision about which cache block to replace, without any knowledge of future page requests. Which block should it be? (In the worst case, you replace a cache block and the very next moment the processor requests exactly that block.)
So, the algorithm must be designed in such a way that it makes an immediate decision upon the arrival of an incoming request, without advance knowledge of the entire request sequence. Algorithms of this type are known as ONLINE ALGORITHMS.
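LRU is a classic example of such an online policy: it evicts based only on the requests seen so far. A minimal sketch (the capacity and request trace are made up for illustration):

    from collections import OrderedDict

    # Minimal online cache with LRU eviction: every decision is made the
    # moment a request arrives, using only the past request sequence.
    def simulate_lru(requests, capacity):
        cache = OrderedDict()
        misses = 0
        for page in requests:
            if page in cache:
                cache.move_to_end(page)        # refresh recency on a hit
            else:
                misses += 1                    # miss: fetch the page
                if len(cache) >= capacity:
                    cache.popitem(last=False)  # evict the least recently used
                cache[page] = True
        return misses

    print(simulate_lru([1, 2, 3, 1, 4, 1, 2], capacity=3))  # 5 misses

An offline policy (Belady's optimal replacement) would instead evict the block whose next use lies farthest in the future - but that requires knowing the entire request sequence in advance, which is exactly what an online algorithm does not have.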
