I am using an STM32F103C8 and I need a function that will return the correct time in microseconds when called from within an interrupt handler. I found the following bit of code online which purports to do that:
uint32_t microsISR()
{
    uint32_t ret;
    uint32_t st = SysTick->VAL;
    uint32_t pending = SCB->ICSR & SCB_ICSR_PENDSTSET_Msk;
    uint32_t ms = UptimeMillis;

    if (pending == 0)
        ms++;

    return ms * 1000 - st / ((SysTick->LOAD + 1) / 1000);
}
My understanding of how this works is that it uses the system clock (SysTick) counter, which repeatedly counts down from 8000 (LOAD + 1); when it reaches zero, an interrupt is generated which increments the variable UptimeMillis. This gives the time in milliseconds. To get microseconds we take the current value of the SysTick counter and divide it by 8000/1000 to get the offset in microseconds. Since the counter is counting down, we subtract this from the current time in milliseconds * 1000. (Actually, to be correct, I believe one should be added to the number of milliseconds in this calculation.)
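For example, with LOAD + 1 = 8000 the divisor is 8000/1000 = 8, so if UptimeMillis is 5 and the counter reads 4000, the function returns 5 * 1000 - 4000/8 = 4500; with the extra millisecond added as just described it would be 6 * 1000 - 500 = 5500, i.e. roughly half way through the current millisecond.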
This is all fine and good unless this function is called (in an interrupt handler) after the system clock counter has already wrapped but before the system clock interrupt has run; then the UptimeMillis count will be off by one. This is the purpose of the following lines:
if (pending == 0)
    ms++;
Looking at this, however, it does not make sense: it increments the millisecond count if there is NO pending interrupt. Indeed, if I use this code I get a large number of glitches in the returned time at the points where the counter rolls over. So I changed the lines to:
if (pending != 0)
    ms++;
This produced much better results, but I still get the occasional glitch (about 1 in every 2000 interrupts), which always occurs at a time when the counter is rolling over.
During the interrupt, I log the current values of milliseconds, microseconds and the counter. I find there are two situations where I get an error:
#   Milli   Micros     DT     Counter   Pending
1   1661    1660550     826      3602   0
2   1662    1661374     824      5010   0
3   1663    1662196     822      6436   0
4   1663    1662022    -174      7826   0
5   1664    1663847    1825      1228   0
6   1665    1664674     827      2614   0
7   1666    1665501     827      3993   0
The interrupts are coming in at a regular rate of about 820 µs. In this case, what seems to be happening between interrupts 3 and 4 is that the counter has wrapped but the pending flag is NOT set. So I need to be adding 1000 to the value, and since I fail to do so I get a negative elapsed time.
The second situation is as follows:
#   Milli   Micros     DT     Counter   Pending
1   1814    1813535     818      3721   0
2   1815    1814357     822      5151   0
3   1816    1815181     824      6554   0
4   1817    1817000    1819         2   1
5   1817    1816817    -183      1466   0
6   1818    1817637     820      2906   0
This is a very similar situation, except in this case the counter has NOT yet wrapped and yet I am already seeing the pending interrupt flag set, which causes me to erroneously add 1000.
Clearly there is some kind of race condition between the two competing interrupts. I have tried setting the clock interrupt priority both above and below that of the external interrupt but the problem persists.
Does anyone have any suggestions on how to deal with this problem, or a suggestion for a different approach to getting the time in microseconds within an interrupt handler?
Read UptimeMillis before and after reading SysTick->VAL to ensure a rollover has not occurred between the two reads.
uint32_t microsISR()
{
    uint32_t ms = UptimeMillis;
    uint32_t st = SysTick->VAL;

    // Did UptimeMillis roll over while reading SysTick->VAL?
    if (ms != UptimeMillis)
    {
        // Rollover occurred, so read both again.
        // Must read both because we don't know whether the
        // rollover occurred before or after reading SysTick->VAL.
        // No need to check for another rollover because there is
        // no chance of another rollover occurring so quickly.
        ms = UptimeMillis;
        st = SysTick->VAL;
    }

    return ms * 1000 - st / ((SysTick->LOAD + 1) / 1000);
}
Or here is the same idea in a do-while loop.
uint32_t microsISR()
{
    uint32_t ms;
    uint32_t st;

    // Read UptimeMillis and SysTick->VAL until
    // UptimeMillis doesn't roll over between the reads.
    do
    {
        ms = UptimeMillis;
        st = SysTick->VAL;
    } while (ms != UptimeMillis);

    return ms * 1000 - st / ((SysTick->LOAD + 1) / 1000);
}
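Both versions assume that UptimeMillis is a volatile counter incremented once per millisecond by the SysTick interrupt. A minimal sketch of that assumption, using the standard CMSIS handler name, would be:

volatile uint32_t UptimeMillis;   // volatile: written in the ISR, read elsewhere

void SysTick_Handler(void)
{
    UptimeMillis++;               // SysTick reloads from LOAD automatically
}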
On my ESP32 I want to have information about the actual time without a WiFi connection or an external RTC chip. I started with this simple code:
#include <time.h>

time_t now;
struct tm* timeinfo;

void Check_Time(void) {
    time(&now);
    timeinfo = localtime(&now);
    Serial.print(asctime(timeinfo));  // asctime() already ends with a newline
}

void setup() {
    Serial.begin(115200);
}

void loop() {
    Check_Time();
    delay(1000);
}
It works, since the output is:
Thu Jan 1 00:07:57 1970
Thu Jan 1 00:07:58 1970
Thu Jan 1 00:07:59 1970
Thu Jan 1 00:08:00 1970
...
and naturally it starts from 1 Jan 1970. Now I want to update this time to the actual one, but I haven't found a direct solution. I know that I could convert a date to a time_t value with the mktime function (is that right?), but how can I pass it to the system? How should I manage this problem?
I got it to work by using:
#include <sys/time.h>
// ...
struct timeval tv;
tv.tv_sec = /* seconds since epoch here */;
tv.tv_usec = /* microseconds here */;
settimeofday(&tv, NULL);
Replacing the comments with the variables that store the time.
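For example, if the date and time arrive as broken-out fields (as they typically would from a BLE packet), a minimal sketch of the mktime-plus-settimeofday route looks like this; the function name and parameters here are hypothetical placeholders:

#include <time.h>
#include <sys/time.h>

void set_system_time(int year, int month, int day, int hour, int minute, int second) {
    struct tm t = {0};
    t.tm_year = year - 1900;   // struct tm counts years from 1900
    t.tm_mon  = month - 1;     // and months from 0
    t.tm_mday = day;
    t.tm_hour = hour;
    t.tm_min  = minute;
    t.tm_sec  = second;

    struct timeval tv;
    tv.tv_sec  = mktime(&t);   // mktime() interprets the fields as local time
    tv.tv_usec = 0;
    settimeofday(&tv, NULL);
}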
I am updating my time over Bluetooth Low Energy.
Here it is working:
1970 1 1 0 0 16
1970 1 1 0 0 17
1970 1 1 0 0 18
Time set
2020 12 6 22 45 32
2020 12 6 22 45 33
2020 12 6 22 45 34
I am trying to read the values of the TCS34725 color sensor with a PIC16 through I2C. Currently, I am continuously polling the clear register on the TCS. However, every 10 or so times I read the value in the clear register, I get a random drop in the readings. For example, a set of consecutive readings may be [17, 17, 17, 17, 17, 17, 17, 17, 14, 15, 16, 17 ...], repeating.
I have tried interfacing with an Arduino Uno in this same situation and get a consistent reading of 17.
I would like to eliminate the drop in the readings.
The code I have in XC8 for reading the TCS is as follows:
void read_colorsensor(void){
    unsigned char color_low[4];
    unsigned char color_high[4];
    int i;

    I2C_Master_Start();
    I2C_Master_Write(0b01010010);       // 7-bit address 0x29 + Write
    I2C_Master_Write(0b10110100);       // Write to cmdreg + access & increment clear low reg
    I2C_Master_Stop();

    I2C_Master_Start();                 // Repeated start command for combined I2C
    I2C_Master_Write(0b01010011);       // 7-bit address 0x29 + Read
    color_low[0] = I2C_Master_Read(1);
    color_high[0] = I2C_Master_Read(0);
    I2C_Master_Stop();

    color[0] = (color_high[0] << 8) | (color_low[0]);
    return;
}
I wrote the following code to get L3 cache miss information.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <papi.h>

int main(int argc, char *argv[]) {
    int i;
    long long counters[3];
    counters[0] = counters[1] = counters[2] = 0;

    int PAPI_events[] = {
        PAPI_TOT_CYC,
        PAPI_L3_TCM,
        PAPI_L3_DCA };

    PAPI_library_init(PAPI_VER_CURRENT);
    i = PAPI_start_counters(PAPI_events, 3);

    printf("Measuring instruction count for this printf\n");

    PAPI_read_counters(counters, 3);
    printf("%lld L3 cache misses %lld L3 cache accesses in %lld cycles\n",
           counters[1], counters[2], counters[0]);

    return 0;
}
But I get an error and zero values for the counters, as shown below. What could be wrong?
PAPI Error: pfm_find_full_event(RETIRED_MISPREDICTED_BRANCH_INSTRUCTIONS,0x7fff22fe65a0): event not found.
PAPI Error: 1 of 4 events in papi_events.csv were not valid.
Measuring instruction count for this printf
0 L3 cache misses 0 L3 cache accesses in 0 cycles
I checked the available counters with papi_avail -a and the counters seem to be supported. CPU information is given below.
Available events and hardware information.
--------------------------------------------------------------------------------
PAPI Version : 5.1.1.0
Vendor string and code : GenuineIntel (1)
Model string and code : Intel(R) Xeon(R) CPU E7- 4830 @ 2.13GHz (47)
CPU Revision : 2.000000
CPUID Info : Family: 6 Model: 47 Stepping: 2
CPU Max Megahertz : 2128
CPU Min Megahertz : 2128
Hdw Threads per core : 1
Cores per Socket : 8
NUMA Nodes : 4
CPUs per Node : 8
Total CPUs : 32
Running in a VM : no
Number Hardware Counters : 7
Max Multiplex Counters : 64
uname output
2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
I have to find the tipping point at which a number causes a type of overflow.
If we assume, for example, that the overflow number is 98, then a very inefficient way of finding it would be to start at 1 and increment by 1 at a time. This would take 98 comparisons.
I punched out a better way of doing this, as follows.
What it basically does is change the check to the next power of two after a known failing condition. For example, we know that 0 fails, so we start checking at 1, then 2, 4, 8, ..., 128. 128 passes, so we check 64+1, 64+2, 64+4, ..., 64+32, which passes, but we know that 64+16 failed, so we start the next round at 1+(64+16) = 1+80. Here's a visual:
 1   1
 2   2
 3   4
 4   8
 5   16
 6   32
 7   64
 8   128 ->
 9   1, 64   // 1 + 64
10   2, 64
11   4, 64
12   8, 64
13   16, 64
14   32, 64 ->
15   1, 80
16   2, 80
17   4, 80
18   8, 80
19   16, 80
20   32, 80 ->
21   1, 96
22   2, 96   // done
Is there some better way of doing this?
If you do not know the max number, I think going with your initial approach to find the MIN=64, MAX=128 range is good. Doing a binary search AFTER you find the min/max will be most efficient (e.g., look at 96; if it causes overflow, then you know the range is MIN=64, MAX=96). If you keep halving the range at each step, you will find the solution faster.
Since 98 was your answer, here is how it would pan out with a binary search. This takes 13 steps instead of 22:
// your initial approach
1 1
2 2
3 4
4 8
5 16
6 32
7 64
8 128 ->
// range found, so start binary search
9 (64,128) -> 96
10 (96,128) -> 112
11 (96,112) -> 104
12 (96,104) -> 100
13 (96,100) -> 98 // done
// you may need to do step 14 here to validate that 97 does not cause overflow
// -- depends on your exact requirement
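Here is a minimal sketch of that idea in C, assuming a hypothetical predicate causes_overflow(n) that returns true once n has reached the tipping point; it doubles until the predicate fires and then binary-searches the resulting range for the first value that overflows:

#include <stdbool.h>

/* Hypothetical predicate: true once n triggers the overflow. */
bool causes_overflow(unsigned int n);

unsigned int find_tipping_point(void)
{
    /* Phase 1: keep doubling until we overshoot the tipping point. */
    unsigned int lo = 0, hi = 1;
    while (!causes_overflow(hi)) {
        lo = hi;
        hi *= 2;
    }

    /* Phase 2: binary search in (lo, hi] for the first overflowing value. */
    while (hi - lo > 1) {
        unsigned int mid = lo + (hi - lo) / 2;
        if (causes_overflow(mid))
            hi = mid;       /* mid overflows: answer is at or below mid */
        else
            lo = mid;       /* mid is safe: answer is above mid */
    }
    return hi;              /* e.g. 98 in the example above */
}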
If you know that the "overflow function" is monotonically increasing, you can keep doubling until you go over, and then apply the classic binary search algorithm. This would give you the following search sequence:
1
2
4
8
16
32
64
128 -> over - we have the ends of our range
Run the binary search in [64..128] range
64..128, mid = 96
96..128, mid = 112
96..112, mid = 104
96..104, mid = 100
96..100, mid = 98
96..98, mid = 97
97 - no overflow ==> 98 is the answer
Here's how I implemented this technique in JavaScript:
function findGreatest(shouldPassCallback) {
    function findRange(knownGood, test) {
        if (!shouldPassCallback(test)) {
            return [knownGood, test];
        } else {
            return findRange(test, test * 2);
        }
    }

    function binarySearchCompare(min, max) {
        if (min > max) {
            throw 'Huh?';
        }
        if (min === max) { return shouldPassCallback(min) ? min : min - 1; }
        if (max - min === 1) { return shouldPassCallback(max) ? max : min; }

        var mid = ~~((min + max) / 2);
        if (shouldPassCallback(mid)) {
            return binarySearchCompare(mid, max);
        } else {
            return binarySearchCompare(min, mid);
        }
    }

    var range = findRange(0, 1);
    return binarySearchCompare(range[0], range[1]);
}
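Note that, as written, findGreatest returns the largest value for which the callback still passes; with the 98 example above, a callback that passes for anything below 98 would make it return 97, so the tipping point itself is one more than the returned value.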