HAL_GetTick() crashes MCU - GCC

I created a simple project using STM32CubeMX for my Nucleo-F446ZE (STM32F446ZET6).
The project should be a USB HID device, but it fails to start. After messing around with the debugger, I discovered that the MCU's PC register goes to 0x00000000, 0xFFFFFFFF, or sometimes a random invalid value.
I didn't modify any code. I compiled it with MDK-ARM (µVision IDE) and with GCC (OpenSTM32), and the same thing happens with both.
Call stack:
Main
SystemClock_Config
HAL_RCC_ClockConfig (line 632)
HAL_GetTick
PS:
RAM gets corrupted after 0x080149A, which is why the program does weird things.
Solution
CubeMX didn't set up the clocks correctly. Here is the setup I used to make USB work:
//RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSI;
RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSE;   // use the external oscillator instead of HSI
//RCC_OscInitStruct.HSIState = RCC_HSI_ON;
//RCC_OscInitStruct.HSICalibrationValue = 16;
RCC_OscInitStruct.HSEState = RCC_HSE_ON;
RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON;
RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSE;
RCC_OscInitStruct.PLL.PLLM = 8;             // with the Nucleo's 8 MHz HSE: 8 / 8 = 1 MHz PLL input
RCC_OscInitStruct.PLL.PLLN = 192;           // 1 MHz * 192 = 192 MHz VCO output
RCC_OscInitStruct.PLL.PLLP = RCC_PLLP_DIV4; // 192 MHz / 4 = 48 MHz SYSCLK
RCC_OscInitStruct.PLL.PLLQ = 4;             // 192 MHz / 4 = 48 MHz USB clock
RCC_OscInitStruct.PLL.PLLR = 2;

The RCC_ClkInitStruct is probably not initialized properly (or at all).
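For reference, here is a minimal sketch of the matching RCC_ClkInitStruct setup. This is not taken from the poster's project; it assumes the Nucleo's 8 MHz HSE and the 48 MHz SYSCLK produced by the PLL values above (one flash wait state is enough at 48 MHz):

RCC_ClkInitTypeDef RCC_ClkInitStruct = {0};

/* Drive SYSCLK from the PLL configured above (48 MHz) */
RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_SYSCLK | RCC_CLOCKTYPE_HCLK
                            | RCC_CLOCKTYPE_PCLK1 | RCC_CLOCKTYPE_PCLK2;
RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK;
RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1;  /* HCLK = 48 MHz */
RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV2;   /* PCLK1 = 24 MHz */
RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1;   /* PCLK2 = 48 MHz */

if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_1) != HAL_OK)
{
    Error_Handler();
}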

Related

Add I2S audio in device tree for SAM9x60 board

Our team has a SAM9x60 board and recently added an external audio board (UDA1334A, link: Documents). Unfortunately, that document only covers the Raspberry Pi, and its device tree is quite different from our board's. So I have tried to add it to our device tree myself, mostly based on the SAM9x60 tutorial (which uses another codec board), but it is really different.
As I understand it, the audio board uses the UDA1334 codec, and I have to add a sound node to the device tree, like in the SAM9x60 tutorial:
sound {
    compatible = "mikroe,mikroe-proto";
    model = "wm8731 @ sam9x60ek";
    i2s-controller = <&i2s>;
    audio-codec = <&wm8731>;
    dai-format = "i2s";
};
But I haven't found any driver for this card. After looking around, I tried simple-audio-card:
sound {
    compatible = "simple-audio-card";
    simple-audio-card,name = "1334 Card";
    simple-audio-card,format = "i2s";
    simple-audio-card,widgets = "Speaker", "Speakers";
    simple-audio-card,routing = "Speakers", "Speaker";
    simple-audio-card,bitclock-master = <&codec_dai>;
    simple-audio-card,frame-master = <&codec_dai>;

    simple-audio-card,cpu {
        #sound-dai-cells = <0>;
        sound-dai = <&i2s>;
    };

    codec_dai: simple-audio-card,codec {
        #sound-dai-cells = <1>;
        sound-dai = <&uda1334>;
    };
};

uda1334: codec@1a {
    compatible = "nxp,uda1334";
    nxp,mute-gpios = <&pioA 8 GPIO_ACTIVE_LOW>;
    nxp,deemph-gpios = <&pioC 3 GPIO_ACTIVE_LOW>;
    status = "okay";
};
When booting, I received this message:
OF: /sound/simple-audio-card,codec: could not get #sound-dai-cells for /codec@1a
asoc-simple-card sound: parse error -22
asoc-simple-card: probe of sound failed with error -22
So am I using simple-audio-card the right way, or is there another way? Normally ALSA registers a classD sound card, but I think that is just an amplifier. Sorry, I'm an Android SW developer and have to take over the HW job from someone who quit.
Side question: I have investigated the Raspberry Pi device tree based on the UDA1334 document, and it is very different. As I understand it, the Pi already uses the HiFiBerry DAC overlay, but how can that work with an external DAC like the UDA1334? I have not seen any extra node in the device tree; it looks like they just enable dtoverlay=hifiberry-dac and dtoverlay=i2s-mmap and it works.
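A note on the parse error above: -22 is -EINVAL. The simple-card parser resolves sound-dai with of_parse_phandle_with_args(), which reads #sound-dai-cells from the node the phandle points at - here /codec@1a, which doesn't define it (the #sound-dai-cells = <1> sits in the simple-audio-card,codec link subnode instead). A rough C sketch of that lookup, paraphrased rather than copied from the kernel source:

#include <linux/of.h>

/* Paraphrased sketch of how a sound-dai phandle is resolved: the
 * parser reads #sound-dai-cells from the *target* node to learn how
 * many argument cells follow the phandle. If the target lacks the
 * property (or the cell count doesn't match), parsing fails with
 * -EINVAL (-22), which is the error seen above. */
static int resolve_dai(struct device_node *link_np)
{
    struct of_phandle_args args;
    int ret;

    ret = of_parse_phandle_with_args(link_np, "sound-dai",
                                     "#sound-dai-cells", 0, &args);
    if (ret)
        return ret;

    of_node_put(args.np);
    return 0;
}

If that is right, declaring #sound-dai-cells = <0> inside the uda1334: codec@1a node itself (the UDA1334 exposes a single DAI, so zero cells) should get past this particular error.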

device-tree mismatch: .probe never called

I'm having trouble understanding how the device tree works, or more specifically why this driver won't init. This is in the Rockchip vendor kernel for Android, version 3.10.
drivers/watchdog/rk29_wdt.c (reduced for readability)
static const struct of_device_id of_rk29_wdt_match[] = {
    { .compatible = "rockchip,watch dog" }
};

static struct platform_driver rk29_wdt_driver = {
    .probe = rk29_wdt_probe,
    [..]
    .driver = {
        .of_match_table = of_rk29_wdt_match,
        .name = "rk29-wdt",
    },
};

static int __init watchdog_init(void)
{
    printk("watchdog_init\n");
    return platform_driver_register(&rk29_wdt_driver);
}
and this is the SoC .dtsi:
arch/arm/boot/dts/rk3288.dtsi
watchdog: wdt@2004c000 {
    compatible = "rockchip,watch dog";
    reg = <0xff800000 0x100>;
    clocks = <&pclk_pd_alive>;
    clock-names = "pclk_wdt";
    interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
    rockchip,irq = <0>;
    rockchip,timeout = <2>;
    rockchip,atboot = <1>;
    rockchip,debug = <0>;
    status = "okay";
};
however, the .probe function of the driver is never called. It is compiled in, and the __init function is called. I suspect it has something to do with the device tree entry not matching? Maybe the space is an issue?
Or is there anything else that runs before .probe that determines whether the driver should continue?
Also, I'm not sure how a flattened tree works, so maybe this is relevant:
arch/arm/mach-rockchip/rk3288
DT_MACHINE_START(RK3288_DT, "Rockchip RK3288 (Flattened Device Tree)")
    .smp = smp_ops(rockchip_smp_ops),
    .map_io = rk3288_dt_map_io,
    .init_time = rk3288_dt_init_timer,
    .dt_compat = rk3288_dt_compat,
    .init_late = rk3288_init_late,
    .reserve = rk3288_reserve,
    .restart = rk3288_restart,
MACHINE_END
There are a number of possible ways this might happen, and most of them are well away from the driver code itself.
Firstly, a .dtsi fragment alone doesn't tell the whole story - the device tree syntax is hierarchical, so the properties (in particular the status) might still be overridden by the board-level .dts which includes a basic SoC .dtsi file.
Secondly, the compiled DTB isn't the last word either, since the bootloader may dynamically modify it before passing it to the kernel - this is typically done for memory nodes and SMP enable methods, but could potentially affect anything.
This kind of debugging is often best tackled in reverse, by examining the state of the booted system, then working backwards to figure out how things got that way - the specifics of this particular question rule some of this out already, but for the sake of completeness:
If the kernel knows about the driver, and it's loaded and properly initialised, it should show up somewhere in /sys/bus/*/drivers/ - otherwise, it may be in a module which needs loading, or it may have failed to initialise due to some unmet dependency on some other driver or resource.
If the kernel knows about the device, it should show up somewhere in /sys/bus/*/devices/, and if it's correctly bound to a driver and probed then they should both have a symlink to each other.
If the device is nowhere to be found, then on a DT-based system the next place to check would be /proc/device-tree/ (dependent on CONFIG_PROC_DEVICETREE on older kernels, and canonically found in /sys/firmware/devicetree/base/ on newer ones) - this will show the view of the DT as the kernel found it, and a bit of poking around there should hopefully make clear any missing nodes or out-of-place properties, such as a disabled node causing the kernel to skip creating a device altogether. Beware that the property files themselves are just the raw data - so you probably want to go snooping with hexdump rather than cat, or the sketch below - and that all numeric cells are in big-endian byte order.
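Since the property files are raw big-endian cells, a quick C one-off like this does the decoding that cat can't (the node path below is just an example; substitute the one you are checking):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

/* Dump a device-tree property as 32-bit cells. DT cells are stored
 * big-endian, so convert each one to host byte order before printing. */
int main(void)
{
    FILE *f = fopen("/proc/device-tree/wdt@2004c000/reg", "rb");
    uint32_t cell;

    if (!f) { perror("fopen"); return 1; }
    while (fread(&cell, sizeof cell, 1, f) == 1)
        printf("0x%08x\n", ntohl(cell));
    fclose(f);
    return 0;
}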
I notice that your definition is missing the so-called SENTINEL in the array: a null, empty struct that terminates it.
Here is an example:
static const struct of_device_id clk_ids[] = {
    { .compatible = "sirf,atlas7-clkc" },
    {},
};
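Applied to the driver above, the match table would become:

static const struct of_device_id of_rk29_wdt_match[] = {
    { .compatible = "rockchip,watch dog" },
    { /* sentinel */ },
};

The kernel walks the table entry by entry until it hits the empty terminator; without it, the match loop runs off the end of the array into whatever memory happens to follow.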

Arduino not writing to file

I'd like the Arduino to write to a file whenever an AJAX call is made. The AJAX call works, but it doesn't write to the file. All other code inside the AJAX handler does execute.
void handle_ajax(){
    int startUrlIndex = HTTP_req.indexOf("button");
    int endUrlIndex = HTTP_req.indexOf(" HTTP");
    String url = HTTP_req.substring(startUrlIndex, endUrlIndex);
    int startButtonIndex = url.indexOf("device-") + 7; // 7 is the length of "device-"; I really just want the number.
    int endButtonIndex = url.indexOf("&");
    String button = url.substring(startButtonIndex, endButtonIndex);
    int startStateIndex = url.indexOf("state=") + 6; // 6 is the length of "state="; I really just want the number.
    String state = url.substring(startStateIndex);
    int device = button.toInt();
    int newState = state.toInt();

    dim_light(device, newState * 12);
    write_config("", "text");
}

bool write_config(String line, String text){
    configFile = SD.open("config.ini", FILE_WRITE);
    if(configFile){
        configFile.write("Dipshit");
    }
    configFile.close();
    Serial.println("Works.");
    return true;
}
I don't see anything wrong with the code provided.
Check the basics first (a minimal init sketch follows this list):
SD card is either a standard SD card or an SDHC card.
SD card is formatted with a FAT16 or FAT32 file system.
The correct pin has been used for the CS pin in the SD.begin() command. This depends on the hardware used. http://www.arduino.cc/en/Reference/SDCardNotes
The SPI is wired up correctly (pins 11, 12, and 13 on most Arduino boards).
The hardware SS pin is set as an output (even if it isn't used as the CS pin).
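If in doubt, here is a minimal sketch that exercises just those points; the CS pin number is an assumption, so adjust it to your wiring:

#include <SPI.h>
#include <SD.h>

const int chipSelect = 4; // assumption - use whatever pin your shield wires to CS

void setup() {
    Serial.begin(9600);
    pinMode(SS, OUTPUT); // hardware SS must be an output even if it isn't the CS pin
    if (!SD.begin(chipSelect)) {
        Serial.println("SD init failed - check card, format and CS pin");
        return;
    }
    Serial.println("SD init OK");
}

void loop() {}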
I know from past experience that these little Arduinos can run out of SRAM quite quickly when reading and writing to an SD card. The ReadWrite example program uses about 50% of the UNO's SRAM alone!
To test whether this is your problem, run the SD card read/write example program (with the correct CS pin in the SD.begin() call). If this works, then the problem is likely that you have run out of SRAM. Try using an Arduino MEGA 2560 instead, which has 4x the amount of SRAM.
Edit: The latest Arduino IDE (v1.6.8) actually calculates how much SRAM is used by global variables. It does not take local variables into account.
Found the problem: RAM.
The Arduino had insufficient RAM at the point of opening the SD card, resulting in a failure.
If anyone else encounters the same issue: you need 300 or more bytes of free RAM. Check this by serial-printing FreeRam().
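FreeRam() comes with the SdFat library; if it isn't available, the usual AVR equivalent is a small helper like this, which measures the gap between the top of the heap and the current stack:

// Approximate free SRAM on AVR: the distance between the end of the
// heap (__brkval, or __heap_start if nothing has been allocated yet)
// and the address of a local variable on the stack.
int freeRam() {
    extern int __heap_start, *__brkval;
    int v;
    return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}

Printing this just before SD.open() shows whether you are under the 300-odd bytes mentioned above.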

Nsight 2.2 sometimes works, sometimes doesn't

I have a problem with the Parallel Nsight 2.2 debugger. It is very strange and I don't know how to describe it; it sometimes works and sometimes doesn't.
What I observed is that it works when a dynamic array (which has no effect on the CUDA kernels or any other function like cudaMemcpy etc.) has 3 elements. And this is important: if I set the size to 4 or more, the program just dies, no errors, nothing, it just goes down.
The interesting thing is that if I run it normally via the normal debugger, the whole program works correctly with the right results. Also interesting is that when I declare this array as static
unsigned topology[4];
and set the same values, the Nsight debugger works, but very slowly.
So first of all I commented out all the CUDA source code (the kernels and all CUDA functions), but it was still the same: it crashed. So I started commenting out more host code, and I found the loop (in host code) which does this creepy thing. When the program under Nsight debugging reaches this loop (shown below), it crashes, BUT when I add a statement to the loop that prints the iteration number on screen, it runs: the loop finishes, the whole program finishes, and then the debugger tells me:
Debug Assertion Failed!
Program:
File: f:\dd\vctools\crt_bld\self_x86\crt\src\dbgheap.c
Line: 1322
Expression: _CrtIsValidHeapPointer(pUserData)
... I don't even have an F: drive... so what is going on?
Anyway, in the normal debugger it runs fine and with the right results.
This is the mentioned loop and the dynamic array *topology:
unsigned *topology;
unsigned numberOfLayersInput = 5;
topology = new unsigned[numberOfLayersInput];
topology[0] = 784;
topology[1] = 1000;
topology[2] = 800;
topology[3] = 300;
topology[4] = 10;

kernelTopology_ *topologyOfKernels;
topologyOfKernels = new kernelTopology_[numberOfLayersInput - 1];

for (int i = 0, numberOfThreads; i < numberOfLayersInput; i++)
{
    cout << i << endl; // this is the added line!
    numberOfThreads = fixedTopology[i];
    topologyOfKernels[i].size = numberOfThreads;
    if (numberOfThreads > THREADS_PER_BLOCK)
        topologyOfKernels[i].BLOCK_SIZE = THREADS_PER_BLOCK;
    else
        topologyOfKernels[i].BLOCK_SIZE = numberOfThreads;
    if (numberOfThreads <= THREADS_PER_BLOCK)
        topologyOfKernels[i].GRID_SIZE = 1;
    else if (fixedTopology[i] % topologyOfKernels[i].BLOCK_SIZE == 0)
        topologyOfKernels[i].GRID_SIZE = fixedTopology[i] / topologyOfKernels[i].BLOCK_SIZE;
    else
        topologyOfKernels[i].GRID_SIZE = (fixedTopology[i] / topologyOfKernels[i].BLOCK_SIZE) + 1;
}
I can't see any mistake in this code... and the normal debugger has no problem with it.
I have reinstalled the graphics drivers, the CUDA toolkit, the CUDA SDK and Parallel Nsight, but it does the same creepy things. By the way, I use Windows 7 64-bit and VS2010.
Does anyone have any idea what I should do about this?
Please let me know if you have any idea :)
The error
Debug Assertion Failed! Program: File: f:\dd\vctools\crt_bld\self_x86\crt\src\dbgheap.c Line: 1322
is from the Microsoft C runtime function _CrtIsValidHeapPointer. The default debug build adds additional heap and stack checks into the code. This function is used to verify that a specified pointer is in the local heap. The path f:\... is the location of the C runtime's source file on the machine where Microsoft built the library, which is why you don't have an F: drive.
The assertion indicates an out-of-bounds memory access. The cause appears to be the incorrect allocation of topologyOfKernels, which corrupts the heap:
topologyOfKernels = new kernelTopology_ [numberOfLayersInput - 1];
should be allocating numberOfLayersInput elements:
topologyOfKernels = new kernelTopology_ [numberOfLayersInput];
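For illustration, here is a minimal repro of that off-by-one, with kernelTopology_ reduced to a stub; in a VS2010 debug build the assertion typically fires when the corrupted block is freed:

struct kernelTopology_ { int size, BLOCK_SIZE, GRID_SIZE; };

int main()
{
    unsigned numberOfLayersInput = 5;
    // Allocates 4 elements, but the loop below writes 5.
    kernelTopology_ *topologyOfKernels = new kernelTopology_[numberOfLayersInput - 1];
    for (unsigned i = 0; i < numberOfLayersInput; i++)
        topologyOfKernels[i].size = 0;  // i == 4 writes past the end of the block
    delete[] topologyOfKernels;         // the debug heap detects the corruption here
    return 0;
}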

ExtCreatePen and Windows 7 GDI

I created DIBPATTERN pens with the ExtCreatePen API for custom pattern pens.
It successfully draws the desired lines on Windows XP, but on Windows 7 (x64 in my case) it does not draw any lines; there are no changes on screen.
(Other simply-created pens, for example CreatePen(PS_DOT,1,0), do work.)
I found that calling SetROP2(hdc, R2_XORPEN) makes the following line-drawing API calls draw something, but with an XOR operation, and I don't want XOR drawing.
Here is my code to create the pen. It has no problem on Windows XP:
LOGBRUSH lb;
lb.lbStyle = BS_DIBPATTERN;
lb.lbColor = DIB_RGB_COLORS;

int cb = sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 2 + 8*4; // header + 2 palette entries + 8 rows of pattern bits
HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, cb);
BITMAPINFO* pbmi = (BITMAPINFO*) GlobalLock(hg);
ZeroMemory(pbmi, cb);
pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
pbmi->bmiHeader.biWidth = 8;
pbmi->bmiHeader.biHeight = 8;
pbmi->bmiHeader.biPlanes = 1;
pbmi->bmiHeader.biBitCount = 1;
pbmi->bmiHeader.biCompression = BI_RGB;
pbmi->bmiHeader.biSizeImage = 8;
pbmi->bmiHeader.biClrUsed = 2;
pbmi->bmiHeader.biClrImportant = 2;
pbmi->bmiColors[1].rgbBlue =
pbmi->bmiColors[1].rgbGreen =
pbmi->bmiColors[1].rgbRed = 0xFF; // entry 0 stays black (zeroed), entry 1 is white

DWORD* p = (DWORD*) &pbmi->bmiColors[2]; // the 1-bpp pattern bits follow the two palette entries
for(int k = 0; k < 8; k++) *p++ = patterns[k];

GlobalUnlock(hg);
lb.lbHatch = (LONG) hg;

s_aSelectionPens[i] = ExtCreatePen(PS_GEOMETRIC, 1, &lb, 0, NULL);
ASSERT(s_aSelectionPens[i]); // succeeds on both XP and Win7
GlobalFree(hg);
Is this a bug only on my PC? Please check this problem.
Thank you.
This is a known bug with the Windows 7 GDI, though good luck getting Microsoft to acknowledge it.
http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/a70ab0d5-e404-4e5e-b510-892b0094caa3
-Noel
I will admit, I was dubious at first, but I compiled and ran your program, and it does indeed fail to draw the second line on Windows 7, but only in Aero mode.
By switching to Windows Basic or Classic mode, all four lines are drawn, as expected.
I can only assume that this is some kind of bad interaction between your custom pen and the new way Aero mode implements GDI calls. This seems like it might be a Microsoft bug; perhaps you can post this question on one of their message boards?
So you are creating an 8x8 black/white (monochrome) bitmap as a DIB, and then using that to create a pen. I see nothing wrong with this code. This definitely looks like a Windows bug, but there may be a workaround.
Try setting
pbmi->bmiHeader.biClrUsed = 0;
pbmi->bmiHeader.biClrImportant = 0;
In this context, setting the values to 0 should mean the same thing as setting them to 2, but 0 is more standard behavior for situations where you are using the full palette. You still need two entries in your palette; 0 just means "full size based on biBitCount".
Also, each palette entry is an RGBQUAD, which means there is room for alpha, and your alpha is set to 0. That should be ignored, but maybe it isn't, so try setting the high byte (rgbReserved) of your two palette entries to 0xFF or 0x80.
Finally, it's possible that your palette is being ignored entirely, and Windows is using the BkMode, BkColor and TextColor of the destination DC for everything, so you need to make sure that they are set to values that you can see.
My guess is that this has something to do with alpha transparency, since GDI ignores alpha entirely, but Aero doesn't.
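Putting those suggestions together against the creation code above (untested against the Aero issue; rgbReserved is the fourth byte of a standard RGBQUAD):

pbmi->bmiHeader.biClrUsed = 0;          // 0 = palette size implied by biBitCount
pbmi->bmiHeader.biClrImportant = 0;
pbmi->bmiColors[0].rgbReserved = 0xFF;  // make entry 0 (black) opaque
pbmi->bmiColors[1].rgbReserved = 0xFF;  // make entry 1 (white) opaque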
