Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I am reading through "Computer Architecture: A Quantitative Approach, 5th ed." and am looking at an example from Chapter 5 on page 350. Attached is a scan of the example in question. I do not quite follow the logic of how they do things in this example.
My questions are as follows:
Where is the 0.3ns cycle time coming from?
200/0.3 is roughly 666 cycles; I follow this. However, when plugged back into the CPI equation, it makes no sense: 0.2% (0.002) × 666 equals 1.332, not 1.2. What is going on here?
When they say that "the multiprocessor with all local references is 1.7/0.5 = 3.4 times faster", where are they getting that from? Meaning: I see nothing in the given information stating that local communication is twice as fast...
Any help would be appreciated.
Where is the 0.3ns cycle time coming from?
That comes from the clock rate of 3.3 GHz: 1 / 3.3 GHz ≈ 0.3 ns.
200/0.3 is roughly 666 cycles; I follow this. However, when plugged back into the CPI equation, it makes no sense: 0.2% (0.002) × 666 equals 1.332, not 1.2. What is going on here?
I think you're right. That looks like a misprint. That should be
CPI = 0.5 + 1.33 = 1.83
When they say that "the multiprocessor with all local references is 1.7/0.5 = 3.4 times faster", where are they getting that from? Meaning: I see nothing in the given information stating that local communication is twice as fast...
They don't say anywhere that local communication is twice as fast. They're dividing the effective CPI that they calculated for the multiprocessor with 0.2% remote references by the base CPI of 0.5. This tells you how many times faster the multiprocessor with all local references is. (Of course it should be about 1.83/0.5 = 3.66 times faster.)
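As a quick sanity check of the arithmetic (a sketch only, using the figures quoted in this thread: 3.3 GHz clock, 200 ns remote access, 0.2% remote references, base CPI of 0.5):

```python
# Recompute the example's numbers; all values are those quoted in the thread.
clock_ghz = 3.3
cycle_time_ns = 1 / clock_ghz             # ~0.303 ns (the book rounds to 0.3)
remote_access_ns = 200.0
remote_fraction = 0.002                   # 0.2% of references go remote
base_cpi = 0.5

remote_cycles = remote_access_ns / cycle_time_ns            # ~660 (book: ~666)
effective_cpi = base_cpi + remote_fraction * remote_cycles  # ~1.82 (book says 1.2, likely a misprint)
speedup_all_local = effective_cpi / base_cpi                # ~3.6x

print(f"cycle time       : {cycle_time_ns:.3f} ns")
print(f"remote cost      : {remote_cycles:.0f} cycles")
print(f"effective CPI    : {effective_cpi:.2f}")
print(f"all-local speedup: {speedup_all_local:.2f}x")
```

With the cycle time rounded to 0.3 ns you get the 666 cycles, CPI ≈ 1.83, and 1.83/0.5 ≈ 3.66 discussed above.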
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
Recently, I have been trying to understand how the Binary Extended Euclidean Algorithm works at the processor level. This question is all about finding an inverse element in GF(2^m) with a polynomial basis.
Generally I came across the Extended Euclidean Algorithm for evaluating an inverse element, but the fact is that it involves too many addition and multiplication operations. The binary EEA requires just bit-shifting operations (equivalent to division by 2, i.e. a logical shift right). The algorithm is in this link, page number 8.
In steps 3 and 5 of this algorithm, every iteration shifts the parameters u and b by 1 bit to the right, shifting a zero into the MSB at the same time. The loop ends when u == 1 and returns b. My question is: how many primitive operations does a processor (say a 32-bit processor, for example) perform in step 3 or step 5 of every iteration?
I came across the barrel shifter and I am quite confused about how fast the shifting takes place. Should I really count these as primitive operations, or should I ignore them because the shifting may be faster?
It would really help me a lot if someone would show the primitive operations for the case where the size of u is 194 bits.
In case you are wondering about the denominator x in steps 3 and 5 of the algorithm: it's the polynomial representation, x is nothing but 10 in binary, and the parameter u is an N-bit binary number.
There is no generic answer to this question: you can use portable code that will be tedious to optimize, or highly machine-specific code that will be even more complicated to optimize without breaking.
If you want real performance, you have to use MMX/AVX registers of the maximum width you can get your hands on. Intel provides lightweight wrappers around the low-level instructions as macros and inline functions.
Always use unsigned types for your shifting operations to avoid unnecessary steps.
Usually there is a "right shift" assembly opcode which can right-shift a register by a given number of bits. Such an operation takes one cycle.
This assumes that your value is already loaded into a register, however.
The best answer anyway: implement this algorithm in a low-level language (C, C++) and look at the assembly code produced by the compiler.
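Not compiler output, but as a rough illustration of what "one shift of a 194-bit u" decomposes into on a 32-bit machine, here is a sketch that stores u as seven little-endian 32-bit words (the limb layout and names are my own assumption, not anything from the linked paper):

```python
# Sketch: shifting a 194-bit value right by one bit on a 32-bit machine.
# u is stored as 7 little-endian 32-bit limbs (7 * 32 = 224 >= 194 bits).
WORD_BITS = 32
MASK = (1 << WORD_BITS) - 1

def shift_right_one(limbs):
    """Return limbs >> 1, least-significant limb first; a zero is shifted
    into the overall MSB, as in steps 3 and 5 of the algorithm."""
    out = []
    for i, w in enumerate(limbs):
        # Bit 0 of the next-higher limb becomes bit 31 of this limb.
        carry_in = limbs[i + 1] & 1 if i + 1 < len(limbs) else 0
        out.append(((w >> 1) | (carry_in << (WORD_BITS - 1))) & MASK)
    return out

# Per limb this is roughly one shift, one AND, one OR (plus loads/stores),
# so a 194-bit shift costs on the order of 7 such groups of primitive
# operations per iteration -- or a single pass over wider SIMD registers.

# Tiny self-check with a 194-bit value.
u = (1 << 193) | 5
limbs = [(u >> (32 * i)) & MASK for i in range(7)]
assert sum(w << (32 * i) for i, w in enumerate(shift_right_one(limbs))) == u >> 1
```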
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
2^(n/2+10 log n)
or
2^n?
I was doing an exercise in MIT OCW 6.006. It has a problem which states that the latter grows faster than the former, but I can't agree with the proof. I say that the former grows faster than the latter. Could someone explain whether I am wrong and let me know why? Thanks!
You could frame that differently by pulling out the exponent part and just asking which is bigger: n/2 + 10 log n, or n.
Here it's clear the second will be bigger whenever 10 log n is less than half of n.
That becomes true when n reaches about 30, so from then on, the second is bigger (for log base 10).
Let's discuss log base 2 further: when might 10 log N be less than N/2?
Well, that's the same as asking when log N becomes less than N/20.
Loosely speaking, log_2 is the number of bits needed to describe a number in base 2. So:
log_2(32) gives us 5.
log_2(64) gives us 6.
log_2(128) gives us 7. <-- look here 128:7 is about 18:1
log_2(256) gives us 8.
log_2(512) gives us 9.
log_2(1024) gives us 10.
log_2(64000) gives us ~16.
Now we are looking for when the first value (32, 64, 128, etc.) is more than 20 times the second. As you can see, this happens just past the 128:7 pair, and they rapidly get much further apart.
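A minimal numeric check of the two exponents (using log base 2, as in the table above):

```python
# Compare the exponents n/2 + 10*log2(n) and n directly.
from math import log2

for n in (16, 64, 128, 256, 512, 1024):
    lhs = n / 2 + 10 * log2(n)
    print(f"n={n:5d}: n/2 + 10*log2(n) = {lhs:7.1f}   vs   n = {n}")

# The crossover is between n = 128 and n = 256 (around n ~ 143); past it,
# 2**n overtakes 2**(n/2 + 10*log2(n)) and pulls away quickly.
```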
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Please recommend an error-correcting algorithm for use with a very strange data channel.
The channel consists of two parts: Corrupter and Eraser.
Corrupter receives a word consisting of 10000 symbols in a 3-symbol alphabet, say {'a','b','c'}.
Corrupter changes each symbol with probability 10%.
Example:
Corrupter input: abcaccbacbbaaacbcacbcababacb...
Corrupter output: abcaacbacbbaabcbcacbcababccb...
Eraser receives the corrupter's output and erases each symbol with probability 94%.
Eraser produces a word of the same length in the 4-symbol alphabet {'a','b','c','*'}.
Example:
Eraser input: abcaacbacbbaabcbcacbcababccb...
Eraser output: *******a*****************c**...
So, in the eraser's output, approximately 6% × 10000 = 600 symbols would not be erased; approximately 90% × 600 = 540 of those would preserve their original values, and approximately 60 would be corrupted.
What encoding-decoding algorithm with error correction is best suited for this channel?
What amount of useful data could be transmitted while providing a > 99.99% probability of successful decoding?
Is it possible to transmit 40 bytes of data through this channel? (256^40 ~ 3^200)
Here's something you can at least analyze:
Break your 40 bytes up into 13 chunks of 25 bits (with some wastage, so this bit can obviously be improved).
2^25 < 3^16, so you can encode the 25 bits into 16 a/b/c "trits" - again, the wastage means scope for improvement.
With 10,000 trits available you can give each of your 13 encoded chunks 769 output trits. Pick (probably at random) 769 different linear (mod 3) functions on 16 trits - each function is specified by 16 trits, and you take a vector dot product between those trits and the 16 input trits. This gives you your 769 output trits.
Decode by considering all possible (2^25) chunk values and picking the one which matches the most surviving trits. You have some hope of getting the right answer as long as there are at least 16 surviving trits, which (Excel's BINOMDIST() tells me) happens often enough that there is a pretty good chance it will happen for all 13 of the 25-bit chunks.
I have no idea what error rate you get from garbling, but random linear codes have a pretty good reputation, even if this one has a short block size because of my brain-dead decoding technique. At worst you could try simulating the encoding, transmission, and decoding of 25-bit chunks and work it out from there. You can get a slightly more accurate lower bound on the error rate than above if you pretend that the garbling stage erases as well, and so recalculate with a slightly higher probability of erasure.
I think this might actually work in practice if you can afford the 2^25 guesses per 25-bit block to decode. OTOH, if this is a question in a class, my guess is you need to demonstrate your knowledge of some less ad hoc techniques already discussed in your class.
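To reproduce the BINOMDIST check without Excel, here is a small sketch (the 769 / 0.06 / 16 figures come from the scheme above; the helper name is mine):

```python
# Probability that a 769-trit chunk keeps at least 16 trits when each
# trit survives the eraser independently with probability 0.06.
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), via the complement."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

n_trits, p_survive, chunks = 769, 0.06, 13
p_one = prob_at_least(16, n_trits, p_survive)
print(f"P(>=16 survivors in one chunk)   : {p_one:.10f}")
print(f"P(all {chunks} chunks get >=16 each): {p_one ** chunks:.10f}")

# The expected number of survivors per chunk is 769 * 0.06 ~ 46, so
# dropping below 16 is a several-sigma deviation and both probabilities
# come out extremely close to 1.
```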
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
Using continued fractions, I'm generating integer ratios between frequencies to a certain precision, along with the error (difference from integer ratio to the real ratio). So I end up with things like:
101 Hz with 200 Hz = 1:2 + 0.0005
61 Hz with 92 Hz = 2:3 - 0.0036
However, I've run into a snag on actually deciding which of these will be more dissonant than others. At first I thought low numbers = better, but something like 1:51 would likely not be very dissonant, since one frequency is 51 times the other (more than five octaves above it). It might be a screaming high, ear-bleeding pitch, but I don't think it would have dissonance.
It seems to me that it must be related to the product of the two sides of the ratio compared to the constituents somehow. 1 * 51 = 51, which doesn't "go up much" from one side. 2 * 3 = 6, which I would think would indicate higher dissonance than 1:51. But I need to turn this feeling into an actual number, so I can compare 5:7 vs 3:8, or any other combinations.
And how could I work error into this? Certainly 1:2 + 0 would be less dissonant than 1:2 + 1. Would it be easier to apply an algorithm that works for the above integer ratios directly to the frequencies themselves? Or does having the integer ratio with an error allow for a simpler calculation?
edit: Thinking on it, an algorithm that could extend to any set of N frequencies in a chord would be awesome, but I get the feeling that would be much more difficult...
edit 2: Clarification:
Let's consider that I am dealing with pure sine waves, and either ignoring the specific thresholds of the human ear or abstracting them into variables. If there are severe complications, then they are ignored. My question is how it could be represented in an algorithm, in that case.
Have a look at Chapter 4 of http://homepages.abdn.ac.uk/mth192/pages/html/maths-music.html. From memory:
1) If two sine waves are just close enough for the human ear to be confused, but not so close that the human ear cannot tell they are different, there will be dissonance.
2) Pure sine waves are extremely rare - most tones have all sorts of harmonics. Dissonance is very likely to come from colliding harmonics rather than colliding main tones. To sort of follow your example, two tones many octaves apart are unlikely to be dissonant because their harmonics may not meet, whereas with only a couple of octaves' difference and loads of harmonics, a flute could sound out of tune with a double bass. So dissonance (or not) depends not only on the frequencies of the main tones but also on the harmonics present, and this has been demonstrated experimentally by constructing sounds with peculiar pseudo-harmonics.
The answer is in Chapter 4 of Music: a Mathematical Offering. In particular, see the following two figures:
consonance/dissonance plotted against the critical bandwidth, in 4.3 (History of consonance and dissonance)
dissonance vs. frequency, in 4.5 (Complex tones)
Of course you still have to find a nice way to turn these data into a formula / program that gives you a measure of dissonance but I believe this gives you a good start. Good luck!
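As a starting point for that formula, here is a hedged sketch in the spirit of the curves in Chapter 4. It uses Sethares' parameterization of the Plomp-Levelt roughness curve; the constants are quoted from memory and should be checked against the book, and the function names are my own. It works directly on frequencies, so it extends to any set of N partials (or pure sines) in a chord:

```python
# Hedged sketch of a dissonance measure: sum a Plomp-Levelt-style
# roughness over all pairs of partials. Constants are Sethares-style
# values from memory -- verify against Chapter 4 before relying on them.
from math import exp
from itertools import combinations

def pair_dissonance(f1, a1, f2, a2):
    """Roughness contributed by two pure partials (f in Hz, a = amplitude)."""
    f_lo, f_hi = sorted((f1, f2))
    s = 0.24 / (0.021 * f_lo + 19.0)   # widens the curve in lower registers
    x = s * (f_hi - f_lo)
    return a1 * a2 * (exp(-3.5 * x) - exp(-5.75 * x))

def dissonance(partials):
    """Total dissonance of a list of (frequency, amplitude) pairs.
    For pure sine waves, pass one pair per note; for richer tones,
    include each tone's harmonics as additional partials."""
    return sum(pair_dissonance(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in combinations(partials, 2))

# The dyads from the question, as pure sines:
print(dissonance([(101.0, 1.0), (200.0, 1.0)]))   # ~1:2 + 0.0005
print(dissonance([(61.0, 1.0), (92.0, 1.0)]))     # ~2:3 - 0.0036
```

Note that this sidesteps the integer ratio and its error entirely: the mistuning simply shows up as a slightly shifted frequency difference.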
This will help:
http://www.acs.psu.edu/drussell/demos/superposition/superposition.html
You want to look at superposition.
A discrete (or fast) Fourier transform is the most generic means to get what you're asking for.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I recently went through a video which said that in the relation X -> W <- Y, X does not influence Y. X has a causal relationship to W, and W has an evidential relationship to Y. So will X not affect Y?
Let's let W represent "The Lawn is Wet," X represent "It rained recently," and Y represent "The automatic sprinklers were on recently."
Clearly, X influences W: If it rained recently, it is likely that the lawn is wet.
Clearly, Y influences W: If the sprinklers were on recently, it is likely that the lawn is wet.
Clearly, knowing W, we can make inferences about X and Y.
But, does X directly influence Y?
Put differently, does the fact of recent rain (or not) influence whether the automatic sprinklers were on recently?
No. If we know nothing about the state of the lawn, because we didn't look outside, the chance of recent rain is independent of the chance of recent sprinkler activity.
Once we look outside, though, and determine the state of the lawn, then we can draw inferences between rain and sprinkler activity.
If W is not observed, then X and Y are independent.
One such v-structured CPD (that is entirely deterministic) looks like this.
Draw X and Y independently as binary variables, and then W is the sum. If you know only X=1, then Y could be 0 and W would be 1, or Y = 1 and W = 2. Knowing X doesn't let you distinguish between those two possibilities.
In general, I think reasoning about v-structures is much easier with deterministic functions than probabilistic ones. Think about what happens when the v-structure is ADD, XOR, and AND and you can usually get specific insights about general claims.
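A tiny simulation of the deterministic ADD v-structure just described (a sketch; the variable names are only for illustration):

```python
# X and Y are independent fair coin flips; W = X + Y is their deterministic sum.
import random

random.seed(0)
samples = []
for _ in range(100_000):
    x = random.randint(0, 1)
    y = random.randint(0, 1)
    samples.append((x, y, x + y))

# W unobserved: knowing X tells you nothing about Y.
x1 = [(x, y, w) for x, y, w in samples if x == 1]
print("P(Y=1 | X=1)      ~", sum(y for _, y, _ in x1) / len(x1))       # ~0.5

# Condition on W = 1 as well: now X = 1 forces Y = 0.
x1w1 = [t for t in x1 if t[2] == 1]
print("P(Y=1 | X=1, W=1) =", sum(y for _, y, _ in x1w1) / len(x1w1))   # 0.0
```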
Something that helped me understand this was a concrete example that I found in Sections 1.3 and 1.4 of this online book:
It goes through all the cases you have listed above and explains each case with the running example described in Section 1.4. Please have a look here:
http://people.cs.aau.dk/~uk/papers/pgm-book-I-05.pdf