Maximum value of PCR - mpeg

What is the maximum value of the Program Clock Reference (PCR) in MPEG?
I understand that it is derived from a 27 MHz clock and periodically loaded into a 42-bit register:
PCR(i) = PCR_Base(i) * 300 + PCR_Ext(i)
where PCR_Base is loaded into a 33-bit register
and PCR_Ext is loaded into a 9-bit register.
So the maximum value of PCR, in 27 MHz clock ticks, is:
PCR = (2^33 - 1)*300 + (2^9 - 1) = 2,576,980,377,811
=> 2,576,980,377,811 / 27,000,000 = 95,443.7 s = 1,590.7 min = 26.5 hours
The register overflow happens after 26.5 hours of continuous streaming. Is this understanding correct?

The PCR_ext(i) value ranges from 0 to 299, not up to 2^9 - 1.
So the maximum PCR = (2^33 - 1)*300 + 299 = 2,576,980,377,599
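As a quick sanity check, here is a small C sketch (not from the original post) that reproduces this maximum and the resulting wrap-around time:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 33-bit base counts 90 kHz ticks; the 9-bit extension counts 0..299. */
    const uint64_t pcr_base_max = (1ULL << 33) - 1;
    const uint64_t pcr_ext_max  = 299;

    uint64_t max_pcr = pcr_base_max * 300 + pcr_ext_max;  /* 27 MHz ticks */
    double seconds   = (double)max_pcr / 27000000.0;

    printf("max PCR = %llu\n", (unsigned long long)max_pcr);  /* 2576980377599 */
    printf("wraps after %.1f hours\n", seconds / 3600.0);     /* ~26.5 hours   */
    return 0;
}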

Pulse sensor not showing correct data on ESP32 (Micropython)

I connected a pulse sensor (this one) to my ESP32 (WROOM-32 module, MicroPython) using three wires (3.3 V, GND, analog), and I expected to read pulse data in my terminal. The first value I received was 6; after that the data stays at 0. When I take my finger off the sensor and put it back, the values I receive still make no sense (12, 18, ...).
The onboard LED also blinks only when I put my finger on the sensor.
The tutorial I used for this interface is here.
The code:
from machine import Pin, Signal, ADC, Timer

adc = ADC(Pin(36))

# On my board on = off, need to reverse.
led = Signal(Pin(2, Pin.OUT), invert=True)

MAX_HISTORY = 250

# Maintain a log of previous values to
# determine min, max, and threshold.
history = []
beat = False
beats = 0

def calculate_bpm(t):
    global beats
    print('BPM:', beats * 6)  # Triggered every 10 seconds, * 6 = bpm
    beats = 0

timer = Timer(1)
timer.init(period=10000, mode=Timer.PERIODIC, callback=calculate_bpm)

while True:
    v = adc.read()
    history.append(v)
    # Get the tail, up to MAX_HISTORY length
    history = history[-MAX_HISTORY:]
    minima, maxima = min(history), max(history)
    threshold_on = (minima + maxima * 3) // 4   # 3/4
    threshold_off = (minima + maxima) // 2      # 1/2
    if not beat and v > threshold_on:
        beat = True
        beats += 1
        led.on()
    if beat and v < threshold_off:
        beat = False
        led.off()

How to calculate byte offset in n-way associative cache

From what I know, this is how an address maps into the cache:
| tag | index | offset |
However, I'm wondering how the offset is calculated.
For example, in the following exercise:
Given a CPU with the 32 KB cache module shown in the figure, what is the bit length of the tag (how many tag bits)?
Total address bits = 32 = tag + index + block offset + byte offset
1024 sets = 2^10 => set index = 10 bits
Block size = 8 bytes = 2^3 => byte offset = 3 bits
4 blocks/set = 2^2 => block offset = 2 bits
=> tag = 32 - (10 + 3 + 2) = 17 bits
Here the offset contains block offset + byte offset, but in the link (in the example catalog) the offset in the answer contains only the byte offset, not the block offset.
So does the offset contain block offset + byte offset, or the byte offset only?
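For comparison, here is a minimal C sketch (not from the original post) of the usual set-associative breakdown, under the assumption that the way within a set is selected by tag comparison, so no address bits are spent on a block offset; with the numbers above this convention gives a 19-bit tag:

#include <stdio.h>

/* Integer log2 for power-of-two values. */
static unsigned log2u(unsigned x)
{
    unsigned bits = 0;
    while (x > 1) {
        x >>= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    const unsigned address_bits = 32;
    const unsigned cache_size   = 32 * 1024;  /* 32 KB           */
    const unsigned block_size   = 8;          /* bytes per block */
    const unsigned ways         = 4;          /* blocks per set  */

    unsigned sets        = cache_size / (block_size * ways);  /* 1024 */
    unsigned index_bits  = log2u(sets);                       /* 10   */
    unsigned offset_bits = log2u(block_size);                 /* 3    */
    unsigned tag_bits    = address_bits - index_bits - offset_bits;

    /* Prints: sets=1024 index=10 offset=3 tag=19 */
    printf("sets=%u index=%u offset=%u tag=%u\n",
           sets, index_bits, offset_bits, tag_bits);
    return 0;
}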

Problems getting the current time in microseconds with a STM32 device

I am using an STM32F103C8 and I need a function that will return the correct time in microseconds when called from within an interrupt handler. I found the following bit of code online which purports to do that:
uint32_t microsISR()
{
    uint32_t ret;
    uint32_t st = SysTick->VAL;
    uint32_t pending = SCB->ICSR & SCB_ICSR_PENDSTSET_Msk;
    uint32_t ms = UptimeMillis;

    if (pending == 0)
        ms++;

    return ms * 1000 - st / ((SysTick->LOAD + 1) / 1000);
}
My understanding of how this works: it uses the SysTick counter, which repeatedly counts down from 8000 (LOAD + 1); when it reaches zero, an interrupt is generated which increments the variable UptimeMillis. This gives the time in milliseconds. To get microseconds, we take the current value of the SysTick counter and divide it by 8000/1000 to give the offset in microseconds. Since the counter counts down, we subtract it from the current time in milliseconds * 1000. (Actually, to be correct, I believe one should be added to the number of milliseconds in this calculation.)
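For reference, the mechanism described above corresponds to a SysTick setup along these lines (a sketch based on the description; the handler and init function below are assumed, not code from the question):

#include <stdint.h>
#include "stm32f1xx.h"  /* device/CMSIS header; exact name assumed */

volatile uint32_t UptimeMillis = 0;

/* Standard CMSIS SysTick handler: fires each time the counter wraps
   from 0 back to LOAD (7999), i.e. once per millisecond with an
   8 MHz core clock. */
void SysTick_Handler(void)
{
    UptimeMillis++;
}

/* Hypothetical one-time setup: 8 MHz / 8000 = 1 kHz tick rate,
   so SysTick->LOAD + 1 == 8000. */
void uptime_init(void)
{
    SysTick_Config(SystemCoreClock / 1000);
}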
This is all fine and good unless, when this function is called (in an interrupt handler), the SysTick counter has already wrapped but the SysTick interrupt has not yet run; then the UptimeMillis count will be off by one. This is the purpose of the following lines:
if (pending == 0)
    ms++;
Looking at this, however, it does not make sense: it increments the millisecond count if there is NO pending interrupt. Indeed, if I use this code, I get a large number of glitches in the returned time at the points where the counter rolls over. So I changed the lines to:
if (pending != 0)
    ms++;
This produced much better results but I still get the occasional glitch (about 1 in every 2000 interrupts) which always occurs at a time when the counter is rolling over.
During the interrupt, I log the current value of milliseconds, microseconds and counter value. I find there are two situations where I get an error:
Milli Micros DT Counter Pending
1 1661 1660550 826 3602 0
2 1662 1661374 824 5010 0
3 1663 1662196 822 6436 0
4 1663 1662022 -174 7826 0
5 1664 1663847 1825 1228 0
6 1665 1664674 827 2614 0
7 1666 1665501 827 3993 0
The interrupts are coming in at a regular rate of about 820 µs. In this case, what seems to be happening between interrupts 3 and 4 is that the counter has wrapped but the pending flag is NOT set, so I need to add 1000 to the value, and since I fail to do so I get a negative elapsed time.
The second situation is as follows:
Milli Micros DT Counter Pending
1 1814 1813535 818 3721 0
2 1815 1814357 822 5151 0
3 1816 1815181 824 6554 0
4 1817 1817000 1819 2 1
5 1817 1816817 -183 1466 0
6 1818 1817637 820 2906 0
This is a very similar situation, except in this case the counter has NOT yet wrapped and yet I already see the pending interrupt flag, which causes me to erroneously add 1000.
Clearly there is some kind of race condition between the two competing interrupts. I have tried setting the clock interrupt priority both above and below that of the external interrupt but the problem persists.
Does anyone have a suggestion for how to deal with this problem, or for a different approach to get the time in microseconds within an interrupt handler?
Read UptimeMillis before and after SysTick->VAL to ensure a rollover has not occurred.
uint32_t microsISR()
{
    uint32_t ms = UptimeMillis;
    uint32_t st = SysTick->VAL;

    // Did UptimeMillis roll over while reading SysTick->VAL?
    if (ms != UptimeMillis)
    {
        // Rollover occurred, so read both again.
        // Must read both because we don't know whether the
        // rollover occurred before or after reading SysTick->VAL.
        // No need to check for another rollover because there is
        // no chance of another rollover occurring so quickly.
        ms = UptimeMillis;
        st = SysTick->VAL;
    }

    return ms * 1000 - st / ((SysTick->LOAD + 1) / 1000);
}
Or here is the same idea in a do-while loop.
uint32_t microsISR()
{
    uint32_t ms;
    uint32_t st;

    // Read UptimeMillis and SysTick->VAL until
    // UptimeMillis doesn't roll over.
    do
    {
        ms = UptimeMillis;
        st = SysTick->VAL;
    } while (ms != UptimeMillis);

    return ms * 1000 - st / ((SysTick->LOAD + 1) / 1000);
}

Altering Newton's cooling example in Dymola to show sinusoidal behavior

I am trying to alter the Newton cooling problem (link: https://mbe.modelica.university/behavior/equations/physical/#physical-types) so that:
1) T_inf is 300 K for the first 5 seconds.
2) At t = 5 s, I switch it to a sinusoidal wave, with T_inf having an average value of 400 K, a peak-to-peak amplitude of 50 K and a period of 10 seconds.
3) At t = 85 s, I want to change the period of the sine wave to 0.01 seconds, keeping everything else the same. The simulation has to end at 100 s.
I am successful with parts 1 and 2, but part 3 isn't working for me.
My code is below.
model MAE5833_Saleem_NewtonCooling_HW2_default
  // Types
  type Temperature = Real (unit="K", min=0);
  type ConvectionCoefficient = Real (unit="W/(m2.K)", min=0);
  type Area = Real (unit="m2", min=0);
  type Mass = Real (unit="kg", min=0);
  type SpecificHeat = Real (unit="J/(K.kg)", min=0);
  // Parameters
  parameter Temperature T0=400 "Initial temperature";
  parameter ConvectionCoefficient h=0.7 "Convective cooling coefficient";
  parameter Area A=1.0 "Surface area";
  parameter Mass m=0.1 "Mass of thermal capacitance";
  parameter SpecificHeat c_p=1.2 "Specific heat";
  parameter Real freqHz=0.1 "Frequency of sine wave from 5 to 85 seconds";
  parameter Real freq2=100 "Frequency giving a period of 0.01 s after 85 seconds";
  parameter Real amplitude=25 "Amplitude for a peak-to-peak of 50 K";
  parameter Real starttime=5;
  parameter Real T_init=300;
  parameter Real T_new=400;
  Temperature T "Temperature";
  Temperature T_inf;
initial equation
  T = T0 "Specify initial value for T";
equation
  m*c_p*der(T) = h*A*(T_inf - T) "Newton's law of cooling";
algorithm
  when {time > starttime, time < 85} then
    T_inf := (T_new - T_init) + amplitude*Modelica.Math.sin(2*3.14*freqHz*(time - starttime));
  elsewhen time > 85 then
    T_inf := (T_new - T_init) + amplitude*Modelica.Math.sin(2*3.14*freq2*(time - starttime));
  elsewhen time < starttime then
    T_inf := T_init;
  end when;
  annotation (experiment(
      StopTime=100,
      Interval=0.001,
      __Dymola_Algorithm="Rkfix2"));
end MAE5833_Saleem_NewtonCooling_HW2_default;
You have to use an if statement in this case instead of when.
Here is the updated equation section, with some further suggestions below:
equation
  m*c_p*der(T) = h*A*(T_inf - T) "Newton's law of cooling";
  if time >= starttime and time < 85 then
    T_inf = (T_new - T_init) + amplitude*sin(2*Modelica.Constants.pi*freqHz*(time - starttime));
  elseif time >= 85 then
    T_inf = (T_new - T_init) + amplitude*sin(2*Modelica.Constants.pi*freq2*(time - starttime));
  else
    T_inf = T_init;
  end if;
You can use sin instead of Modelica.Math.sin, as the function is built in.
Use Modelica.Constants.pi instead of defining pi yourself.
I have merged your algorithm section into the equation section. Don't use an algorithm section unless there is a very good reason to do so.

How to find memory and runtime used by a NuSMV model

Given a NuSMV model, how do I find its runtime and how much memory it consumed?
The runtime can be found using this command at the system prompt: /usr/bin/time -f "time %e s" NuSMV filename.smv
The above gives the wall-clock time. Is there a better way to obtain runtime statistics from within NuSMV itself?
Also, how do I find out how much RAM the program used while processing the file?
One possibility is to use the usage command, which displays both the amount of RAM currently in use and the User and System time consumed by the tool since it was started (thus, usage should be called both before and after each operation you want to profile).
An example execution:
NuSMV > usage
Runtime Statistics
------------------
Machine name: *****
User time 0.005 seconds
System time 0.005 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 6932K
Virtual text size = 8139K
Virtual data size = 34089K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 30487K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 0
Minor page faults = 2607
Swaps = 0
Input blocks = 0
Output blocks = 0
Context switch (voluntary) = 9
Context switch (involuntary) = 0
NuSMV > reset; read_model -i nusmvLab.2018.06.07.smv ; go ; check_property ; usage
-- specification (L6 != pc U cc = len) IN mm is true
-- specification F (min = 2 & max = 9) IN mm is true
-- specification G !((((max > arr[0] & max > arr[1]) & max > arr[2]) & max > arr[3]) & max > arr[4]) IN mm is true
-- invariant max >= min IN mm is true
Runtime Statistics
------------------
Machine name: *****
User time 47.214 seconds
System time 0.284 seconds
Average resident text size = 0K
Average resident data+stack size = 0K
Maximum resident size = 270714K
Virtual text size = 8139K
Virtual data size = 435321K
data size initialized = 3424K
data size uninitialized = 178K
data size sbrk = 431719K
Virtual memory limit = -2147483648K (-2147483648K)
Major page faults = 1
Minor page faults = 189666
Swaps = 0
Input blocks = 48
Output blocks = 0
Context switch (voluntary) = 12
Context switch (involuntary) = 145
