I have a problem catching the performance monitoring interrupt (PMI, specifically from the instruction counter) on qemu-kvm. The code below works fine on a real machine (Intel Core i5-4300U), but on qemu-kvm (qemu-system-x86_64 -cpu host) I do not see even one PMI, although the counter itself works normally; I can verify that it increments.
However, I have tested with the Linux kernel, and it catches the overflow interrupt just fine on the same qemu-kvm, so there is obviously a step I am missing when configuring the performance monitoring counter on qemu-kvm.
Can someone point it out to me?
Here is the pseudo-code:
#define LAPIC_SVR 0xF0
#define LAPIC_LVT_PERFM 0x340
#define CPU_LOCAL_APIC 0xFFFFFFFFBFFFE000
#define NMI_DELIVERY_MODE (0x4 << 8) // NMI
#define MSR_PERF_GLOBAL_CTRL 0x38F
#define MSR_PERF_FIXED_CTRL 0x38D
#define MSR_PERF_FIXED_CTR0 0x309
#define MSR_PERF_GLOBAL_OVF_CTRL 0x390
/*Configure LAPIC*/
apic_base = Msr::read<Paddr>(Msr::IA32_APIC_BASE)
map(CPU_LOCAL_APIC, apic_base & 0xFFFFF000) // No caching, etc.
Msr::write (Msr::IA32_APIC_BASE, apic_base | 0x800);
write (LAPIC_SVR, read (LAPIC_SVR) | 0x100);
*reinterpret_cast<uint32 volatile *>(CPU_LOCAL_APIC + LAPIC_LVT_PERFM) = NMI_DELIVERY_MODE;
/*Configure MSR_PERF_FIXED_CTR0 to have overflow interrupt*/
Msr::write(Msr::MSR_PERF_GLOBAL_CTRL, Msr::read<uint64>(Msr::MSR_PERF_GLOBAL_CTRL) | (1ull<<32)); // enable IA32_PERF_FIXED_CTR0
Msr::write(Msr::MSR_PERF_FIXED_CTRL, 0xa); // configure IA32_PERF_FIXED_CTR0 to count in user mode and interrupt on overflow
Msr::write(Msr::MSR_PERF_FIXED_CTR0, (1ull << 48) - 0x1000); // overflow after 0x1000 instructions (1ull, since 1 << 48 overflows a 32-bit int)
Msr::write(Msr::MSR_PERF_GLOBAL_OVF_CTRL, Msr::read<uint64>(Msr::MSR_PERF_GLOBAL_OVF_CTRL) & ~(1UL<<32)); // clear overflow condition
I'm using ESP-IDF as framework.
I know that the "Brownout detector was triggered" error comes from the low-voltage detector firing when the supply voltage drops. Usually the MCU will restart automatically when that error occurs.
Yes, that detector can be configured, but can I handle the error in software, the way ESP-IDF handles errors with the esp_err_t convention, so that my MCU can just continue running without being restarted by such an error?
What I mean by "handle" is something like the try-catch concept in high-level programming.
It doesn't make any sense to try to "catch" a brownout.
When brownout detection triggers, it means that the ESP32 isn't getting enough power to run reliably. If it can't run reliably, it's not helpful to try to catch an exception indicating that because the exception handler also wouldn't run reliably.
If you're seeing this problem, there's one fix for it and that's to supply enough power to your ESP32 and whatever circuitry you have it connected to. That's it, that's what you do. That means figure out how much current the entire project draws and use a power source that's rated to supply more than that amount of current. If you're using a "wall wart" AC/DC adapter, use one that's rated for a lot more current as many of them can't deliver what they promise to.
The CPU is reset after this error occurs. There may be a way to find out the reason for the reset when the CPU restarts; on STM32 MCUs, for example, the RCC (Reset and Clock Control) registers can be read. During my research I found a solution that can be used with the ESP32.
#include <rom/rtc.h>

void print_reset_reason(RESET_REASON reason)
{
    switch (reason)
    {
        case 1  : Serial.println("POWERON_RESET");           break; /**<1,  Vbat power on reset*/
        case 3  : Serial.println("SW_RESET");                break; /**<3,  Software reset digital core*/
        case 4  : Serial.println("OWDT_RESET");              break; /**<4,  Legacy watch dog reset digital core*/
        case 5  : Serial.println("DEEPSLEEP_RESET");         break; /**<5,  Deep Sleep reset digital core*/
        case 6  : Serial.println("SDIO_RESET");              break; /**<6,  Reset by SLC module, reset digital core*/
        case 7  : Serial.println("TG0WDT_SYS_RESET");        break; /**<7,  Timer Group0 watch dog reset digital core*/
        case 8  : Serial.println("TG1WDT_SYS_RESET");        break; /**<8,  Timer Group1 watch dog reset digital core*/
        case 9  : Serial.println("RTCWDT_SYS_RESET");        break; /**<9,  RTC watch dog reset digital core*/
        case 10 : Serial.println("INTRUSION_RESET");         break; /**<10, Intrusion tested to reset CPU*/
        case 11 : Serial.println("TGWDT_CPU_RESET");         break; /**<11, Timer Group reset CPU*/
        case 12 : Serial.println("SW_CPU_RESET");            break; /**<12, Software reset CPU*/
        case 13 : Serial.println("RTCWDT_CPU_RESET");        break; /**<13, RTC watch dog reset CPU*/
        case 14 : Serial.println("EXT_CPU_RESET");           break; /**<14, APP CPU reset by PRO CPU*/
        case 15 : Serial.println("RTCWDT_BROWN_OUT_RESET");  break; /**<15, Reset when the VDD voltage is not stable*/
        case 16 : Serial.println("RTCWDT_RTC_RESET");        break; /**<16, RTC watch dog reset digital core and RTC module*/
        default : Serial.println("NO_MEAN");
    }
}

void setup() {
    Serial.begin(115200);
    delay(2000);
    Serial.println("CPU0 reset reason: ");
    print_reset_reason(rtc_get_reset_reason(0));
    Serial.println("CPU1 reset reason: ");
    print_reset_reason(rtc_get_reset_reason(1));
}

void loop() {}
Related Links
GitHub - How can I read the reset reason?
https://www.esp32.com/viewtopic.php?t=6855 says to look at components/esp_system/port/brownout.c for how to catch the interrupt. Set the high 2.8 V threshold so you'll have a bit of time; the default is the low 2.43 V threshold, which often doesn't leave enough time to print even the entire ~40-character message. Pre-erase flash if you are trying to save something.
Use a bigger power supply capacitor; 100 µF is probably too small.
A use-case for this might look like a GPS that wants to save some warm-start data if it can, but doesn't want to wear out the flash by saving it all the time. If the data is missing or corrupt, there is a recovery procedure that just takes longer.
Anti-pattern: don't depend on the brownout detector to put a system in a safe state. Those big capacitors get smaller over time.
edit: in actual testing, a single 4700 µF does not reliably provide 1/30 second, but 2 × 4700 µF does (just a WROOM-32, nothing else). I think 10,000 µF would be a heavy lift for my wall-wart to start into.
0 idle0=12954 idle1=25465 FPS=29.9967 sampleRate= 306822.9
0 idle0=12954 idle1=25465 FPS=30.0003 sampleRate= 306836.3
2.80V Brownout...warning...
1 idle0=14692 idle1=23489 FPS=30.0012 sampleRate= 306848.5
1 idle0=14692 idle1=23489 FPS=29.9976 sampleRate= 306859.1
2.43V Brownout...restart...
ets Jul 29 2019 12:21:46
rst:0xc (SW_CPU_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
load:0x3fff0030,len:6664
load:0x40078000,len:14848
load:0x40080400,len:3792
0x40080400: _init at ??:?
entry 0x40080694
I (27) boot: ESP-IDF v4.4.3-dirty 2nd stage bootloader
I (27) boot: compile time 07:37:04
I (27) boot: chip revision: 3
I (30) boot_comm: chip revision: 3, min. bootload�
The 2 X 4700uF carried the CPU pretty far into the reboot.
Here's the quick-and-dirty test code. It needs some thought about what should happen after a temporary brownout that doesn't trip the reset, and maybe a bit more understanding of how to return from the interrupt; it was retriggering immediately until I set the lower voltage in the ISR. The calling code can test the static counter in the 30 FPS loop; that is the 0 or 1 at the start of each log line above.
// modified from components/esp_system/port/brownout.c
// Copyright 2015-2017 Espressif Systems (Shanghai) PTE LTD
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include "esp_private/system_internal.h"
#include "driver/rtc_cntl.h"
#include "esp_rom_sys.h"
#include "soc/soc.h"
#include "soc/cpu.h"
#include "soc/rtc_periph.h"
#include "hal/cpu_hal.h"
#include "hal/brownout_hal.h"
#include "sdkconfig.h"
#if defined(CONFIG_ESP32_BROWNOUT_DET_LVL)
#define BROWNOUT_DET_LVL CONFIG_ESP32_BROWNOUT_DET_LVL
#elif defined(CONFIG_ESP32S2_BROWNOUT_DET_LVL)
#define BROWNOUT_DET_LVL CONFIG_ESP32S2_BROWNOUT_DET_LVL
#elif defined(CONFIG_ESP32S3_BROWNOUT_DET_LVL)
#define BROWNOUT_DET_LVL CONFIG_ESP32S3_BROWNOUT_DET_LVL
#elif defined(CONFIG_ESP32C3_BROWNOUT_DET_LVL)
#define BROWNOUT_DET_LVL CONFIG_ESP32C3_BROWNOUT_DET_LVL
#elif defined(CONFIG_ESP32H2_BROWNOUT_DET_LVL)
#define BROWNOUT_DET_LVL CONFIG_ESP32H2_BROWNOUT_DET_LVL
#else
#define BROWNOUT_DET_LVL 0
#endif
#if SOC_BROWNOUT_RESET_SUPPORTED
#define BROWNOUT_RESET_EN true
#else
#define BROWNOUT_RESET_EN false
#endif // SOC_BROWNOUT_RESET_SUPPORTED
volatile int sBrownOut = 0; // written from the ISR, read by application code
extern void esp_brownout_disable(void);
extern void my_esp_brownout_disable(void);
#ifndef SOC_BROWNOUT_RESET_SUPPORTED
static void my_rtc_brownout_isr_handler(void *arg)
{
    sBrownOut++;
    brownout_hal_intr_clear();
    // change to level 0...prevents immediate retrigger...
    brownout_hal_config_t cfg = {
        .threshold = 0, // was BROWNOUT_DET_LVL
        .enabled = true,
        .reset_enabled = BROWNOUT_RESET_EN,
        .flash_power_down = true,
        .rf_power_down = true,
    };
    brownout_hal_config(&cfg);
    if (sBrownOut == 1) {
        esp_rom_printf("\r\n2.80V Brownout...warning...\r\n");
    } else {
        esp_cpu_stall(!cpu_hal_get_core_id());
        esp_reset_reason_set_hint(ESP_RST_BROWNOUT);
        esp_rom_printf("\r\n2.43V Brownout...restart...\r\n");
        esp_restart_noos();
    }
    return; // dead code follows...
    /* Normally the RTC ISR clears the interrupt flag after the application-supplied
     * handler returns. Since restart is called here, the flag needs to be
     * cleared manually.
     */
    brownout_hal_intr_clear();
    /* Stall the other CPU to make sure the code running there doesn't use the UART
     * at the same time as the following esp_rom_printf.
     */
    esp_cpu_stall(!cpu_hal_get_core_id());
    esp_reset_reason_set_hint(ESP_RST_BROWNOUT);
    esp_rom_printf("\r\n***Brownout detector was triggered\r\n\r\n");
    esp_restart_noos();
}
#endif // not SOC_BROWNOUT_RESET_SUPPORTED
void my_esp_brownout_init(void)
{
    esp_brownout_disable();
    brownout_hal_config_t cfg = {
        // level 7 is the highest voltage: earliest possible warning, most time left...
        .threshold = 7, // was BROWNOUT_DET_LVL
        .enabled = true,
        .reset_enabled = BROWNOUT_RESET_EN,
        .flash_power_down = false, // if this does what it says,
        .rf_power_down = false,    // probably don't want it the first time
    };
    brownout_hal_config(&cfg);
#ifndef SOC_BROWNOUT_RESET_SUPPORTED
    rtc_isr_register(my_rtc_brownout_isr_handler, NULL, RTC_CNTL_BROWN_OUT_INT_ENA_M);
    brownout_hal_intr_enable(true);
#endif // not SOC_BROWNOUT_RESET_SUPPORTED
}

void my_esp_brownout_disable(void)
{
    brownout_hal_config_t cfg = {
        .enabled = false,
    };
    brownout_hal_config(&cfg);
#ifndef SOC_BROWNOUT_RESET_SUPPORTED
    brownout_hal_intr_enable(false);
    rtc_isr_deregister(my_rtc_brownout_isr_handler, NULL);
#endif // not SOC_BROWNOUT_RESET_SUPPORTED
}
I'm implementing a simple device driver. The program that uses this driver takes an argument from the user choosing whether to use demand paging or prefetching (fetching the next page only). When the user requests prefetching, the program should pass this information to the driver. The problem is that the fault handler has a fixed signature:
int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
So how can I feed this additional prefetching information in, so that I can run a different routine for prefetching?
Or is there any other way to achieve this?
[EDIT]
To give a clearer picture:
This how a program takes input.
./user_prog [filename] --prefetch
The user_prog sets some flags internally; now, how can it send this flag information to dev.c (the driver file), given that all the function signatures are fixed, like fault() above? I hope this gives more clarity.
You can use the flags argument of mmap() to pass your custom flags too.
void *mmap(void *addr, size_t length, int prot, int flags,
int fd, off_t offset);
Make sure your custom flag values use bits different from the flag values used by mmap(). According to the manpage, the macros are defined in sys/mman.h. Find the exact values (they may vary across systems) with echo '#include <sys/mman.h>' | gcc -E - -dM | grep MAP_. My system has this:
#define MAP_32BIT 0x40
#define MAP_TYPE 0x0f
#define MAP_EXECUTABLE 0x01000
#define MAP_FAILED ((void *) -1)
#define MAP_PRIVATE 0x02
#define MAP_ANON MAP_ANONYMOUS
#define MAP_LOCKED 0x02000
#define MAP_STACK 0x20000
#define MAP_NORESERVE 0x04000
#define MAP_HUGE_SHIFT 26
#define MAP_POPULATE 0x08000
#define MAP_DENYWRITE 0x00800
#define MAP_FILE 0
#define MAP_SHARED 0x01
#define MAP_GROWSDOWN 0x00100
#define MAP_HUGE_MASK 0x3f
#define MAP_HUGETLB 0x40000
#define MAP_FIXED 0x10
#define MAP_ANONYMOUS 0x20
#define MAP_NONBLOCK 0x10000
Some non-clashing flags would be 0x200 and 0x400.
I'm a beginner in C. I'm trying to work with the EEPROM memory in my ATmega8 and ATtiny2313.
Based on this tutorial I've created the following programs:
1) writes a number to address 5 in the uC's EEPROM
#define F_CPU 1000000UL
#include <avr/eeprom.h>
int main()
{
    uint8_t number = 5;
    eeprom_update_byte((uint8_t *)5, number);
    while (1)
    {
    }
}
2) blinks the LED n times, where n is the number read from address 5 in the EEPROM
#define F_CPU 1000000UL
#include <avr/io.h>
#include <util/delay.h>
#include <avr/eeprom.h>
int main()
{
    DDRB = 0xFF;
    _delay_ms(1000);
    uint8_t number = eeprom_read_byte((uint8_t *)5);
    for (uint8_t i = 0; i < number; i++) // blink 'number' times
    {
        PORTB |= (1 << PB3);
        _delay_ms(100);
        PORTB &= ~(1 << PB3); // was (0<<PB3), which clears the whole port
        _delay_ms(400);
    }
    while (1)
    {
    }
}
The second program blinks the LED many times, and it's never the amount that is supposed to be in the EEPROM. What's the problem? This happens on both the ATmega8 and the ATtiny2313.
EDIT:
Console results after compilation of the first program:
18:01:55 **** Incremental Build of configuration Release for project eeprom ****
make all
Invoking: Print Size
avr-size --format=avr --mcu=attiny2313 eeprom.elf
AVR Memory Usage
Device: attiny2313
Program: 102 bytes (5.0% Full)
(.text + .data + .bootloader)
Data: 0 bytes (0.0% Full)
(.data + .bss + .noinit)
Finished building: sizedummy
18:01:56 Build Finished (took 189ms)
That is one of the classic beginner pitfalls :-)
If you compile simply with avr-gcc <source> -o <out> you will get wrong results here, because you need optimization! The timed EEPROM write sequence MUST be optimized to meet the correct write access timing. So please use '-Os' or '-O3' when compiling with avr-gcc!
If you don't know whether your problem comes from reading or writing the EEPROM, read your EEPROM data back with avarice/avrdude or similar tools.
The next pitfall can be that you erase your EEPROM section when you program the flash. So please have a look at what your programmer really does! A full chip erase erases the EEPROM as well.
Next pitfall: which fuses have you set? Are you running at the expected clock rate? Maybe you have selected the internal clock while your external crystal runs at a different speed than you expect?
Another one: have a look at the fuses again! JTAG pins switched off? Maybe you are only seeing the JTAG lines flickering :-)
Please add the compiler and programming commands to your question!
I have an ATmega16 and have looped back Rx and Tx (just connected Rx to Tx) to send and receive one char in a loop. But I only seem to receive 0x00 instead of the char I send.
I have the CPU configured to 1 MHz.
My thought is that since Rx and Tx are just looped back, it shouldn't matter what speed I set, since both ends use the same setting?
So basically, I'm trying to get an LED on PORTC to flash when the correct char is received.
Here is the code:
#ifndef F_CPU
#define F_CPU 10000000
#endif
#define BAUD 9600
#define BAUDRATE ((F_CPU)/(BAUD*16)-1)
#include <avr/io.h>
#include <util/delay.h>
void uart_init(void){
UBRRH = (BAUDRATE>>8);
UBRRL = BAUDRATE;
UCSRB = (1<<TXEN) | (1<<RXEN);
UCSRC = (1<<URSEL) | (1<<UCSZ0) | (1<<UCSZ1);
}
void uart_transmit (unsigned char data){
while (!(UCSRA & (1<<UDRE)));
UDR = data;
}
unsigned char uart_recive(void){
while(!(UCSRA) & (1<<RXC));
return UDR;
}
int main(void)
{
uart_init();
unsigned char c;
PORTC = 0xff;
DDRC = 0xff;
while(1)
{
_delay_ms(200);
uart_transmit(0x2B);
c = uart_recive();
if(c==0x2B){
PORTC = PORTC ^ 0xff;
}
}
}
Any thoughts on what I am doing wrong?
The code seems right.
Things you may have to check:
if your baud rate is the one you should have
if you meant to send a char like 'p'; right now you are sending a '+'
check your serial-port configuration and see if it matches your code's configuration
I think the last one is the problem.
You can try this code from ATMega manual:
/* Set frame format: 8data, 2stop bit */
UCSRC = (1<<URSEL)|(1<<USBS)|(3<<UCSZ0);
After building your program, go to your port configuration and make sure it is set to 8-bit data format and 2 stop bits. Then test it on your microcontroller and see what happens. Please come back with the result.
Consider the real baud rate accuracy. See e.g. http://www.wormfood.net/avrbaudcalc.php?postbitrate=9600&postclock=1: the AVR has about a 7.5% error at 9600 baud with a 1 MHz clock, which is a rather high error. What you see depends on what you are sending and receiving. "Normally" you would see garbage; if you permanently receive 0x00s, it looks like another problem.
Your F_CPU is set to 10 MHz, but you said the chip is configured for 1 MHz.
Also check your fuses to see whether you really activated the crystal.
If you are just using the internal oscillator: it has a relatively large error, so your UART timings may be broken (though I never had problems using the internal oscillator for debugging).
Another source of error may be your F_CPU definition. Often this preprocessor constant is already defined (possibly also wrongly) in the Makefile (or in the IDE project settings), so the #define in your code has no effect because of the #ifndef.
The PORTC pins TDI, TMS, TCK, and TDO are always high because these are the JTAG pins, and JTAG is enabled by default. If you want to use PORTC in your application, you have to disable JTAG by setting the JTAGEN fuse bit; for the ATmega16, set it to 1 (unprogrammed; 0 means programmed). One more thing: if you run at more than 8 MHz you have to set the fuses accordingly, otherwise your program will give unexpected or wrong results. Thanks.
I've been using WinDbg to debug dump files for a while now.
There's a nice "trick" that works with native x86 programs: you can scan the stack for the CONTEXT_ALL flags value (0x1003f).
On x64, the CONTEXT_ALL flags apparently don't come out to 0x1003f...
Now the problem is that sometimes, when you mix native with managed code, the regular methods of finding exceptions (like .exc or .lastevent) don't work.
What is the equivalent of this 0x1003f on x64? Is there such a constant?
EDIT:
BTW, if you were wondering: in theory it should have been 0x10003f, because of the definitions:
#define CONTEXT_I386 0x00010000
#define CONTEXT_AMD64 0x00100000
#define CONTEXT_CONTROL 0x00000001L // SS:SP, CS:IP, FLAGS, BP
#define CONTEXT_INTEGER 0x00000002L // AX, BX, CX, DX, SI, DI
#define CONTEXT_SEGMENTS 0x00000004L // DS, ES, FS, GS
#define CONTEXT_FLOATING_POINT 0x00000008L // 387 state
#define CONTEXT_DEBUG_REGISTERS 0x00000010L // DB 0-3,6,7
#define CONTEXT_EXTENDED_REGISTERS 0x00000020L // cpu specific extensions
#define CONTEXT_FULL (CONTEXT_CONTROL | CONTEXT_INTEGER | CONTEXT_SEGMENTS)
#define CONTEXT_ALL (CONTEXT_FULL | CONTEXT_FLOATING_POINT | CONTEXT_DEBUG_REGISTERS | CONTEXT_EXTENDED_REGISTERS)
#define CONTEXT_I386_FULL CONTEXT_I386 | CONTEXT_FULL
#define CONTEXT_I386_ALL CONTEXT_I386 | CONTEXT_ALL
#define CONTEXT_AMD64_FULL CONTEXT_AMD64 | CONTEXT_FULL
#define CONTEXT_AMD64_ALL CONTEXT_AMD64 | CONTEXT_ALL
But it's not...
I usually use the segment register values as my key for finding context records (ES and DS have the same value and are next to each other in the CONTEXT structure). The flags trick is neat too, though.
Forcing an exception in a test application and then digging the context record structure out of the stack, it looks like the magic value in my case would be 0x10001f:
0:000> dt ntdll!_context 000df1d0
...
+0x030 ContextFlags : 0x10001f
...
+0x03a SegDs : 0x2b
+0x03c SegEs : 0x2b
...
Also note that the ContextFlags value is not at the start of the structure, so if you find that value you'll have to subtract ##c++(#FIELD_OFFSET(ntdll!_CONTEXT, ContextFlags)) from it to get the base of the context structure.
Also, just in case it wasn't obvious, this value comes from a sample size of exactly one. It may not be correct in your environment and it is of course subject to change (as is anything implementation specific such as this).