from 16-bit unsigned value to minutes and seconds - performance

I'm writing some PIC assembly code to compute the remaining time of a CD track from the elapsed minutes and seconds and the total track length (a 16-bit unsigned value, in seconds).
The elapsed minutes and seconds are two 8-bit unsigned values (two GPR registers); the total track length is a two-byte value (hi byte and lo byte).
I need to compute the remaining time, expressed in minutes and seconds.
I tried computing the total elapsed seconds (elapsed_minutes * 60 + elapsed_seconds) and subtracting it from the total track length. Now I face the problem of converting that result back into MM:SS format. Do I divide by 60 and take the quotient (minutes) and the remainder (seconds)?

Yes, you divide by 60 to get minutes and the remainder is seconds. It's just algebra, not magic!
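To make the arithmetic concrete, here is a minimal sketch of the conversion in Go (the names are mine; on a PIC with no hardware divide instruction, the same result usually comes from a small software divide routine or from repeatedly subtracting 60 while counting the minutes):

// toMinSec splits a remaining time given in seconds into minutes and seconds:
// the quotient of the division by 60 is the minutes, the remainder is the
// seconds. For CD track lengths the minute count easily fits in one byte.
func toMinSec(remaining uint16) (min, sec uint8) {
    return uint8(remaining / 60), uint8(remaining % 60)
}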

Related

How to calculate phone call length in 6 second increments

We log phone calls to our SQL server, and for billing we need to round each call up to 6-second increments: callers pay for every full or partial 6-second increment of time in the call.
We have the call length as a number of seconds, and we can do the work using a CASE statement. I am looking for a more time / clock-cycle efficient way to do this.
Has anyone else already done this and have a query they would be willing to share?
Examples:
Call is 30 seconds in length; since 30 is divisible by 6 (no remainder), we bill for 30 seconds.
Call is 0 seconds in length: no bill.
Call is 32 seconds in length; 32 is not divisible by 6, so we bill for 36 seconds.
You should bill for 6*ceil(time/6.0) seconds.
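If you want to avoid floating point and the CASE statement, the same rounding can be done with integer arithmetic only; a sketch in Go, where the expression (seconds + 5) / 6 * 6 carries over directly to T-SQL because dividing two integers there is integer division:

// billedSeconds rounds a call length up to the next multiple of 6 seconds,
// equivalent to 6*ceil(seconds/6.0) but using only integer operations.
func billedSeconds(seconds int) int {
    return (seconds + 5) / 6 * 6
}

// billedSeconds(30) == 30, billedSeconds(0) == 0, billedSeconds(32) == 36,
// matching the three examples above.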

unable to reduce bytes of a midi file

I am trying to do some operations on MIDI tracks, such as increasing/decreasing the playback speed.
For those who want more detail: to change the playback speed, I need to divide the 'delta times' of each track by the multiplier. E.g., if I want to speed the track up x2, I divide the delta times by 2. Delta times are stored as variable-length quantities, so if I divide the delta times I also need to update the track's size, reducing the size in bytes to keep the track consistent (shorter delta times need fewer bytes when stored as a variable-length quantity).
In my struct, the track length (the size in bytes of the entire track) is stored as a uint32_t. The problem occurs when I try to store the changed track size back. So let's say my original track size was 3200 and, after reducing the delta times, the difference is 240 bytes; then I simply subtract this difference from the original length. However, when I use the 'du' command to check the new file size, the file size inflates heavily, going from around 16 kB to 2000 kB. I don't understand why.
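For reference, here is a minimal sketch (in Go, the function name is mine) of how a MIDI variable-length quantity is encoded; it shows why a smaller delta time can need fewer bytes and why the stored track length has to shrink by the same amount:

// writeVarLen encodes a value as a MIDI variable-length quantity:
// 7 data bits per byte, most significant group first, with the high bit
// set on every byte except the last.
func writeVarLen(v uint32) []byte {
    out := []byte{byte(v & 0x7F)}
    for v >>= 7; v > 0; v >>= 7 {
        out = append([]byte{byte(v&0x7F) | 0x80}, out...)
    }
    return out
}

// writeVarLen(0x80) -> [0x81 0x00] (two bytes), writeVarLen(0x40) -> [0x40] (one byte).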

Calculating CPU Performance in MIPS

I was taking an exam earlier and memorized the questions that I didn't know how to answer but somehow got correct (the online exam, given through an electronic classroom (eclass), was multiple choice; it was coded so each of us got random questions in random order, with the answers shuffled across the choices).
Anyway, back to my questions:
1.)
There is a CPU with a clock frequency of 1 GHz. When the instructions consist of two types as shown in the table below, what is the performance in MIPS of the CPU?
                Execution time (clocks)   Frequency of appearance (%)
Instruction 1             10                          60
Instruction 2             15                          40
Answer: 125
2.)
There is a hard disk drive with specifications shown below. When a record of 15
Kbytes is processed, which of the following is the average access time in milliseconds?
Here, the record is stored in one track.
[Specifications]
Capacity: 25 Kbytes/track
Rotation speed: 2,400 revolutions/minute
Average seek time: 10 milliseconds
Answer: 37.5
3.)
Assume a magnetic disk has a rotational speed of 5,000 rpm, and an average seek time of 20 ms. The recording capacity of one track on this disk is 15,000 bytes. What is the average access time (in milliseconds) required in order to transfer one 4,000-byte block of data?
Answer: 29.2
4.)
When a color image is stored in video memory at a tonal resolution of 24 bits per pixel, approximately how many megabytes (MB) are required to display the image on the screen with a resolution of 1024 x 768 pixels? Here, 1 MB is 10^6 bytes.
Answer: 18.9
5.)
When a microprocessor works at a clock speed of 200 MHz and the average CPI
(“cycles per instruction” or “clocks per instruction”) is 4, how long does it take to
execute one instruction on average?
Answer: 20 nanoseconds
I don't expect someone to answer everything (they are indeed already answered), but I want to know how those answers were arrived at. It's not enough for me just to know the answer. I've tried solving them myself, trial-and-error style, to arrive at those numbers, but it takes minutes to hours, so I need some professional help.
1.)
n = 1/f = 1 / 1 GHz = 1 ns.
n*10 * 0.6 + n*15 * 0.4 = 12 ns (= average instruction time), so 1/(12 ns) ≈ 83.3 MIPS.
2.)3.)
I don't get these, honestly.
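For what it's worth, the usual disk formula (average access time = average seek time + average rotational latency + transfer time, where the average rotational latency is half a revolution and the transfer time is the fraction of the track read times the time per revolution) does reproduce both given answers. A quick check, sketched in Go with a function name of my own:

// diskAccessMs returns the average access time in milliseconds.
func diskAccessMs(rpm, seekMs, trackBytes, recordBytes float64) float64 {
    msPerRev := 60000.0 / rpm // duration of one revolution in ms
    latency := msPerRev / 2   // average rotational latency
    transfer := recordBytes / trackBytes * msPerRev
    return seekMs + latency + transfer
}

// 2.) diskAccessMs(2400, 10, 25000, 15000) = 10 + 12.5 + 15  = 37.5 ms
// 3.) diskAccessMs(5000, 20, 15000, 4000)  = 20 + 6   + 3.2  = 29.2 ms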
4.)
Here, 1 MB is 10^6 bytes.
3 Bytes * 1024 * 768 = 2359296 Bytes = 2.36 MB
But often these 24 bits are packed into 32 bits b/c of the memory layout (word width), so often it will be 4 Bytes*1024*768 = 3145728 Bytes = 3.15 MB.
5)
CPI / f = 4 / 200 MHz = 20 ns.

What does it mean to multiply a duration by a duration?

To my surprise, this compiled
fmt.Println(time.Second * time.Second)
The result is nonsense
277777h46m40s
It doesn't make any sense to multiply a duration by a duration and get another duration.
What's going on?
The Duration type is simply an int64 representing the duration as a nanosecond count
type Duration int64
A Duration represents the elapsed time between two instants as an int64 nanosecond count.
So multiplying one duration by another gives the result of multiplying the number of nanoseconds in each. In my example, this gives a billion billion nanoseconds, or 277777h46m40s. Nonsense, but well-defined!
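If the intent was "n seconds", the usual idioms are to multiply a Duration by a dimensionless count, or to divide one Duration by another to get a plain number; a small sketch:

package main

import (
    "fmt"
    "time"
)

func main() {
    n := 3
    // Scale a duration by a count: convert the count, not the unit.
    fmt.Println(time.Duration(n) * time.Second) // prints "3s"
    // Dividing one Duration by another yields a dimensionless ratio.
    fmt.Println(int64(time.Minute / time.Second)) // prints "60"
}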

Difficult to understand the Gaussian Random Timer?

I have read the Gaussian Random Timer info in the JMeter user manual, but it is difficult to understand. Does anyone have an idea about this? Please explain with an example; it would be highly appreciated. Thanks in advance.
The Gaussian Random Timer has a random deviation (based on Gauss curve distribution) around the constant delay offset.
For example:
Deviation: 100 ms
Constant Delay Offset: 300 ms
The delay will vary between 200 ms (300 - 100) and 400 ms (300 + 100) based on Gauss distribution for about 68% of the cases.
I'll try to explain it with one of the examples already posted:
Constant delay offset: 1000 ms
Deviation: 500 ms
Approximately 68% of the delays will be between [500, 1500] ms (=[1000 - 500, 1000 + 500] ms).
According to the docs (emphasis mine):
The total delay is the sum of the Gaussian distributed value (with mean 0.0 and standard deviation 1.0) times the deviation value you specify, and the offset value
Apache JMeter invokes Random.nextGaussian()*range to calculate the delay. As explained on Wikipedia, the value of nextGaussian() will be within [-1, 1] for only about 68% of the cases. In theory, it could take any value (though the probability of getting values outside this interval decreases very quickly with the distance from it).
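To make that formula concrete, here is a small simulation sketch in Go, where rand.NormFloat64() plays the role of Java's Random.nextGaussian() and the 3000 ms / 2000 ms values match the test described next:

package main

import (
    "fmt"
    "math/rand"
)

func main() {
    const (
        offsetMs    = 3000.0 // constant delay offset
        deviationMs = 2000.0 // deviation
    )
    // delay = nextGaussian() * deviation + offset: about 68% of the values
    // fall in [offset-deviation, offset+deviation]; the rest land outside,
    // which is why a measured gap can deviate by more than the configured
    // deviation.
    for i := 0; i < 10; i++ {
        delay := rand.NormFloat64()*deviationMs + offsetMs
        fmt.Printf("%.0f ms\n", delay)
    }
}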
As proof, I have written a simple JMeter test that launches one thread with a Dummy Sampler and a Gaussian Random Timer: 3000 ms constant delay, 2000 ms deviation.
To rule out CPU load issues, I have configured an additional concurrent thread with another Dummy Sampler and a Constant Timer: 5000 ms.
The results are quite enlightening:
Take for instance samples 10 and 12: 9h53'04.449" - 9h52'57.776" = 6.674", that is a deviation of 3.674" in contrast to the 2.000" configured! You can also verify that the constant timer only deviates by about 1 ms, if at all.
I found a very nice explanation of these Gaussian timers in the Gmane JMeter users list: Timer Question.
The Gaussian Random Timer is nearly the same as the Uniform Random Timer.
In the Uniform Random Timer, the variation around the constant offset has a linear (uniform) distribution.
In the Gaussian Random Timer, the variation around the constant offset has a Gaussian curve distribution.
Constant delay offset (mu) = 300 ms, deviation (si) = 100 ms.
mu - si = 200, mu + si = 400: there is a 68% chance that the time gap between two threads is in the range [200, 400].
mu - 2si = 100, mu + 2si = 500: there is a 95% chance that the time gap is in the range [100, 500].
mu - 3si = 0, mu + 3si = 600: there is a 99.7% chance that the time gap between two consecutive threads is in the range [0, 600].
If you keep widening the range like this, the probability that the time gap falls inside it approaches 100%.
I am restricting myself to 3 iterations because mu - 4si yields a negative value, and elapsed time is always positive in this universe.
But it would be very unrealistic to depend on the Gaussian timer when we have the Constant Timer and the Constant Throughput Timer, which have no standard deviation (si).
Hope that it helps.
