We build Cobalt (RELEASE_9) with ASan ("-fsanitize=address -fno-omit-frame-pointer") on our ARM target.
Our target is not very powerful: the CPU has 4 cores clocked at 1 GHz.
Cobalt always quits after rendering the YouTube logo (the red UI).
We found that Cobalt always receives "h5vcc://network-failure?retry-url=https%3A%2F%2Fwww.youtube.com%2Ftv%3Fwired%3D1%26launch%3Dmenu%26additionalDataUrl%3Dhttp%3A%2F%2Flocalhost%3A40167%2FdialData%3Fapp%3DYouTube%26env_isVideoInfoVisibl" in cobalt::browser::H5vccURLHandler::HandleURL().
Could you give us some suggestions on how to get Cobalt to start properly with ASan?
I'm using the Intel SDK for OpenCL with an Intel HD Graphics 4000 GPU to successfully run an OpenCL program. I've made sure to link against the Intel OpenCL libraries since I also have Nvidia libraries installed.
However, putting a printf() call in the kernel gives the OpenCL compiler error
error: implicit declaration of function 'printf' is not allowed in OpenCL
Also, I've enabled OpenCL kernel debugging in the Visual Studio 2012 plugin, and passed the following options to clBuildProgram:
"-g -s C:\\Path\\to\\my\\program.cl"
However, kernel breakpoints are skipped. Hovering over the breakpoint gives the message:
The breakpoint will not currently be hit. No symbols have been loaded for this document.
My kernels are in a separate .cl file, and I'm setting the breakpoints the way I would for C/C++ code. Is this the correct way to set breakpoints using the Intel SDK for OpenCL debugger?
Why are printf() calls and breakpoints not working with the Intel SDK for OpenCL?
The function printf() was introduced in OpenCL 1.2, and Intel released support for that version only fairly recently. I'd bet that you still have version 1.1.
Regarding the debugger, I have hardly ever used it, but based on this document the path is supposed to be given like this:
"-g -s \"C:\\Path\\to\\my\\program.cl\""
You are also supposed to choose which thread you want to debug.
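In case it helps, here is a minimal host-side sketch of both points (error handling omitted, and assuming `device` and `program` have already been created): query the version string the device actually reports, and pass -g -s with the path wrapped in escaped quotes. If the reported version is below 1.2, the implicit-declaration error for printf() is expected.

    #include <CL/cl.h>
    #include <stdio.h>

    /* Check which OpenCL version the device reports, then build with the
       debug options.  The quoting of the path follows the Intel document
       mentioned above. */
    void check_and_build(cl_device_id device, cl_program program)
    {
        char version[128] = {0};
        clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(version), version, NULL);
        printf("Device reports: %s\n", version);  /* e.g. "OpenCL 1.1 ..." -> no printf in kernels */

        /* Note the escaped inner quotes around the path. */
        const char *options = "-g -s \"C:\\Path\\to\\my\\program.cl\"";
        clBuildProgram(program, 1, &device, options, NULL, NULL);
    }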
I have an OpenCV application, with additional CUDA(.cu) files which I would like to debug using Parallel NSight. NSight debugging works on CUDA samples (without OpenCV .cpp files), but when I try to start the debugger in my application the debugger loads lots of additional modules ("no symbols loaded") and crashes with this error:
OpenCV Error: Gpu API call (out of memory) in unknown function, file ..\.\opencv-2.4.4\modules\core\src\gpumat.cpp, line 1415
Also, a window opens: "Microsoft Visual C++ Debug Library", with "Debug error!" and "R6010 abort has been called".
What could be the issue? Could the loading of these modules be avoided? I am not sure that they are necessary.
And how do I correctly debug CUDA kernels? I know CPU and GPU code cannot be debugged at the same time.
Edit:
I am pretty sure that loading more than 200 kernels makes it crash. A single gpu::GpuMat declaration pulls in more than 100 kernels (modules) on its own, and SURF, BFM and similar algorithms account for the rest...
I'd like to debug only the kernels in which I put breakpoints (i.e. my own kernels, not the OpenCV ones). Is it possible to exclude the other modules/kernels somehow?
Thanks!
It sounds like symbols have been compiled for all of your OpenCV kernels, and this is not what you want. Make sure you are not building OpenCV with CUDA debug flags. Specifically, you don't want the -g/-G/--debug* flags being passed to nvcc.
Debugging a lot of kernels, while having effects on performance, should not cause crashes. I would recommend upgrading to Nsight 3.0 which is available now from the Nsight Visual Studio Edition Early Access site. Many improvements have been made in this version.
I am using u-boot-2011.12 on my OMAP3 target; the cross toolchain is CodeSourcery arm-none-linux-gnueabi. I compiled u-boot, downloaded it onto the target, and booted it, and everything went fine. However, I have some questions about the u-boot relocation feature. We know that this feature is based on PIC (position-independent code), and position-independent code is generated by passing the -fpic flag to gcc, but I don't find -fpic among the compile flags. Without PIC, how can u-boot implement the relocation feature?
Remember that when u-boot is running there is no OS yet, so it doesn't really need the 'pic' feature used in most user applications. What I'll describe below is for the PowerPC architecture.
u-boot initially runs from NV memory (NAND or NOR). After u-boot initializes most of the peripherals (especially the RAM), it locates the top of the RAM, reserves some area for the global data, and then copies itself to RAM. u-boot then branches to the code in RAM and modifies the fixups. u-boot is now relocated in RAM.
Look at the start.S file for your architecture and find the relocate_code() function. Then study, study, study...
I found this troubling too, and banged my head against this question for a few hours.
Luckily I stumbled upon the following thread on the u-boot mailing list:
http://lists.denx.de/pipermail/u-boot/2010-October/078297.html
What this says is that, at least on ARM, using -fPIC/-fPIE at COMPILE TIME is not necessary to generate position-independent binaries. It eases the task of the runtime loader by doing as much work up front as possible, but that's all.
Whether you use -fPIC or not, you can always use -pic / -pie at LINK TIME, which will move all position-dependent references into a relocation section. Since no processing was performed at COMPILE TIME to add helpers, expect this section to be larger than when using -fPIC.
They conclude that, for their purposes, using -fPIC does not have any significant advantage over a link-time-only solution.
[edit] See u-boot commit 92d5ecba for reference:
arm: implement ELF relocations
http://git.denx.de/cgi-bin/gitweb.cgi?p=u-boot.git;a=commit;h=92d5ecba47feb9961c3b7525e947866c5f0d2de5
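For illustration, here is a rough C sketch of the fixup loop that commit adds. The real code is ARM assembly in relocate_code(); __rel_dyn_start/__rel_dyn_end come from the u-boot linker script, reloc_off is the distance between the link address and the copy in RAM, and the loop is simplified (32-bit ARM assumed):

    #include <stdint.h>

    #define R_ARM_RELATIVE 23          /* relocation type: value = base + in-place addend */

    struct elf32_rel {                 /* ELF REL entry (addend stored in the target word) */
        uint32_t r_offset;
        uint32_t r_info;
    };

    /* Provided by the u-boot linker script. */
    extern struct elf32_rel __rel_dyn_start[], __rel_dyn_end[];

    /* For every R_ARM_RELATIVE entry the word at r_offset holds a link-time
       address, so shift both the slot location and its contents by the
       relocation offset after the image has been copied to RAM. */
    void fixup_relocations(uint32_t reloc_off)
    {
        for (struct elf32_rel *rel = __rel_dyn_start; rel < __rel_dyn_end; rel++) {
            if ((rel->r_info & 0xff) == R_ARM_RELATIVE) {
                uint32_t *slot = (uint32_t *)(rel->r_offset + reloc_off);
                *slot += reloc_off;
            }
        }
    }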
When I try to debug an arbitrary CUDA application, e.g. the matrix multiplication or convolutionSeparable sample from the Nvidia GPU Computing SDK 4.0, I always get an output similar to:
Parallel Nsight Debug
CUDA grid launch failed: CUcontext: 2059192 CUmodule: 348912936 Function: _Z9matrixMulILi32EEvPfS0_S0_ii
……
……
And a file with the following content shows up:
Parallel Nsight CUDA Debugger
The application being debugged with the Nexus CUDA debugger, was unable to
find any associated source. This could be for a number of reasons:
1) CUDA has not been initialized.
Make sure cuInit has been called, and it returned a successful result.
2) No CUDA contexts have been created.
Once a context is created, memory can be examined in the context. Each context
shows up as a single "Thread" in the Visual Studio Threads view. (Debug | Windows | Threads)
3) There are no active CUDA grids in any context.
A grid must be launched in order to hit breakpoints.
4) You have selected the "Default Context" in the Visual Studio Threads view.
This context is a placeholder shown when there are no available actual CUDA
contexts. It does not show real data.
5) No CUDA modules have been loaded.
You can see which modules are loaded in each CUDA context by showing the
Visual Studio Modules view. (Debug | Windows | Modules)
6) Symbolics were not found for the loaded .cubin.
The module needs to be built with debug information. Please specify the
-G0 switch when building.
7) A grid launch failed while running a kernel.
Each breakpoint within the corresponding ".cu" file is completely ignored during the run. When I just run the application, without Nsight debugging, the program executes without any problems.
What can I do to tackle this problem?
My Setup:
1x Intel GPU and 1x NV 570GTX; I want to use the local debugging option
Win 7 Pro 64-bit
Dev Env.: VS2008 or VS2010
CUDA 4.0 & Parallel Nsight 2.0
NV Driver Vers.: 285.38
WPF is disabled
TDR is disabled
Windows runs in Basic mode (no aero)
Project Properties: CUDA Runtime API -> GPU -> Generate GPU Debug Information -> Yes (-G0)
Firstly, you need to ensure that your display is driven by the Intel integrated graphics and not the NVIDIA GPU. This is because when you hit a breakpoint in CUDA code you stall the entire GPU, so if the same GPU were used for display your system would naturally lock up.
Note that the hardware requirements for Parallel Nsight indicate you need two supported GPUs whereas you only have one, but if I understand correctly it is possible to use a non-NVIDIA GPU for display (I haven't tried it).
Assuming the above is working you should start by trying out the samples included with Parallel Nsight. You can find them in the Parallel Nsight menu group in the start menu.
A CUDA grid launch failure has a wide variety of causes. This one is probably an access to an array beyond its allocated size, what in the x86 world would be called a segmentation fault. I debug these by selectively commenting out parts of the kernel under test until the error goes away (what we used to call wolf-fence debugging). Another cause of a grid launch failure is a kernel that takes too long (1 or 2 seconds) to execute.
The reason the debugger isn't helping is that the debugger ONLY stops one thread in one block, and your access error occurs before then. You also can't use printf() to find the bug, because its output does not get returned in the event of a grid launch failure.
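To complement the wolf-fence approach, a small host-side check after each suspect launch at least tells you which launch is the one that fails. This is only a sketch using the CUDA runtime API; the kernel name and launch configuration in the usage comment are placeholders:

    #include <cuda_runtime.h>
    #include <stdio.h>

    /* Report errors from the most recent kernel launch.  cudaGetLastError()
       catches launch-configuration problems immediately; the synchronize
       forces the kernel to finish so that runtime faults (e.g. out-of-bounds
       accesses) are reported here rather than at some later API call. */
    static void checkLastLaunch(const char *label)
    {
        cudaError_t err = cudaGetLastError();
        if (err == cudaSuccess)
            err = cudaDeviceSynchronize();
        if (err != cudaSuccess)
            fprintf(stderr, "%s failed: %s\n", label, cudaGetErrorString(err));
    }

    /* Usage, after each launch you suspect:
         myKernel<<<grid, block>>>(d_data, n);
         checkLastLaunch("myKernel");            */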
To add a potential solution on top of the answers already given: one way to avoid the error is to run the Nsight Monitor with administrator rights.
The answer to this is definitely to use the correct driver for your installation of Parallel Nsight. For the latest version (2.1 RC2, currently) this is driver version 285.86; for the current stable version 2.0 it is driver version 270.81, as another poster mentioned.
I'm running a CUDA library that I need to debug for memory problems and other issues. But when I attach cuda-gdb to the process I get the error
error: All CUDA devices are used for X11 and cannot be used while debugging.
I understand the error, but there has to be a way that I can debug the issues. Since I only have 1 GPU, it really isn't practical to turn off X11.
On non-NVIDIA hardware I thought there was a way to emulate a CUDA GPU. Could this be set up for debugging even though I have an NVIDIA GPU?
First of all, since you are using Linux you're in a lucky position: you can kill X fairly easily, just for the duration of the debugging session.
However, if you really want to keep X running while debugging, you are out of luck, and for a very good reason: the display driver has a protection mechanism called the watchdog timer, which is enabled when the GPU in use also drives a display. The watchdog timer interrupts any kernel that runs for longer than (AFAIR) 5 seconds. This is intended to prevent GPU lockups.
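If you want to confirm whether the watchdog applies to your GPU, the runtime exposes it as a device property. A minimal sketch (CUDA runtime API, device 0 assumed):

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);   /* device 0 assumed */
        /* Non-zero means this GPU also drives a display and long-running
           kernels (and debugger breakpoints) hit the watchdog timer. */
        printf("kernelExecTimeoutEnabled = %d\n", prop.kernelExecTimeoutEnabled);
        return 0;
    }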
Alternatively, you could try using Ocelot, but I am not sure how good the debugging features it provides are.