G-WAN: Debug option for C scripts

I get a signal 11 after a while with a C script in a connection handler.
I have no memory problem and the unit tests work well!
I'd like to know if there is a debug mode in G-WAN for C scripts, and how to activate it?
Thanks in advance
Regards

Because they are at the core of Web development, servlets provide automatic crash reports (see the crash.c example).
Connection handlers lacked this feature until G-WAN v3.6+. In this version and more recent releases, it can be enabled with a #pragma debug directive in the handler source code.
There's also a new and very detailed per-thread dump in v3.9+ to get a wider 'picture' of servlet/handler/library/server errors. It is stored in the ./trace file.
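As a minimal sketch, here is what a connection handler with crash reports enabled could look like (the init/clean/main layout follows the handler examples bundled with G-WAN; only the #pragma debug line is the part this answer adds):

// Minimal connection-handler skeleton; the layout follows the handler
// examples shipped with G-WAN, only the #pragma below is specific here.
#pragma debug                        // v3.6+: generate an automatic crash report

#include "gwan.h"                    // G-WAN server API

int init(int argc, char *argv[])     // called once, when the handler is loaded
{
   return 0;
}

void clean(int argc, char *argv[])   // called once, when the handler is unloaded
{
}

int main(int argc, char *argv[])     // called for each notified connection state
{
   return 255;                       // 255: let G-WAN continue processing
}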
Your crash dump reports a libc call error. The main source of such errors has been found to be memory allocations; G-WAN v3.4+ tries to catch them more effectively by using its own (wait-free) allocator.
BTW, siege does not perform very well, whether on single-core or multicore systems (for performance tests, weighttp will let you test the server rather than the client).

Related

gsoap C SOAP server retains memory even after using soap_end() call

For a project, I have implemented a gsoap (v2.8.83) FastCGI server in a C application. It is hosted as FastCGI on the lighttpd web server. It serves SOAP requests and populates SOAP responses with items available on the device. The number of items varies over time. The functionality works fine, but the FastCGI process seems to hold on to some memory.
I expected the gsoap-generated code to release all memory used to serve the request once I call soap_destroy() and soap_end(), but that does not happen.
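For reference, the serve/cleanup cycle in question follows the standard gsoap FastCGI example; this is only a simplified sketch (the real code also populates the response with the device's items):

#include "soapH.h"        /* generated by soapcpp2 from the service definition */
#include "fcgi_stdio.h"   /* FastCGI stdio redirection */

int main(void)
{
    struct soap soap;
    soap_init(&soap);
    while (FCGI_Accept() >= 0)
    {
        soap_serve(&soap);     /* dispatch the incoming SOAP request */
        soap_destroy(&soap);   /* delete managed instances (no-op for plain C data) */
        soap_end(&soap);       /* free all temporary data allocated for the request */
    }
    soap_done(&soap);          /* detach the context at shutdown */
    return 0;
}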
For the same number of items in the SOAP response, the memory held by the process stays the same (but higher than before serving the request) over multiple requests. However, as the number of items in the SOAP response increases, the memory held by the process also increases, and this is an issue for the project as it runs on an embedded device.
I have enabled the DEBUG option provided by gsoap and verified from the logs that all allocated memory is freed.
I have also tried compiling my FastCGI server with a few of the optimization options provided by gsoap and running it as an independent server (isolated from lighttpd), but there is no improvement. I am not able to use the WITH_LEAN compilation flag as I need date-time serialization.
The question "gsoap memory leak C applications" describes a similar issue, but I have tried using the auto-generated functions suggested in its answers for allocating the elements, with no luck.
From the source code, I suspect that the server is retaining some attribute data even after the calls to soap_end(), but I am not quite able to nail it down, as the debug logs give no clue about this.
Does anyone understand what could be happening?
Any help is much appreciated.
Regards,

Is profiling enabled by default for all Go programs?

Since importing the net/http/pprof package is enough to enable the Go profiler in a program, my question is: is the profiler enabled by default for all Go programs, and does importing the package just expose the metrics endpoint? If that is the case, how could I disable the profiler to eliminate the performance overhead?
Is profiling enabled by default for all Go programs?
No.
is the profiler enabled by default for all Go programs, and does importing the package just expose the metrics endpoint?
No.
If that is the case, how could I disable the profiler to eliminate the performance overhead?
No, that's not the case.
If you look at the source code of the pprof.go file, you can see in its handlers that the profiling, tracing, etc. run only when you do a GET on one of its endpoints.
If you specify a duration, for example http://localhost:6060/debug/pprof/trace?seconds=10, the server will take 10 seconds to respond with the trace data. So profiling happens only when you call an endpoint.
You can find some examples in the first comment block of the pprof.go file.

WebSphere - dumps generation system signal vs server script

I am looking for an explanation of the differences between methods of generating thread and heap dumps.
What I know so far:
a system signal, e.g. kill -3, triggers instant creation of both (a thread dump and a heap dump)
the script shipped with Liberty runs a Java agent which does the magic and generates customizable output: a thread dump alone, or together with a heap dump or core dump (or even with both):
server javadump myserver --include=thread,heap,system
https://www.ibm.com/support/knowledgecenter/SSEQTP_liberty/com.ibm.websphere.wlp.doc/ae/rwlp_command_server.html
..so my questions are:
what's better and why?
is there any difference in the generated dumps?
which one would you use to provide an exposed and automated way of creating dumps (e.g. for developers)?
does anyone have any experience with my previous point? I would highly appreciate your pro tips
..and also anything else you might consider worth mentioning here.
PS
What I've noticed: if I send the system signal multiple times in a row, nothing hangs and the number of generated dumps equals the number of attempts made. The same happens if I do it using the script-based solution (of course it takes longer).
..but if I do kill -3 <PID> ; server javadump myserver --include=thread,heap then the server hangs and no dumps are generated - this state is unrecoverable without a restart. <- I've not spent much time on this behaviour, so it could just be a failure unrelated to the commands performed.
Thank you and best regards!

RISC-V: Minimum CSR requirements for a simple RV32I implementation capable of leveraging GCC

What would be the bare minimum CSR requirements for an RV32I capable of running machine code generated with GCC?
I'm thinking of a simple FPGA-based (embedded) implementation. No virtual memory or Linux support is required.
Also, what GCC flags should I use in order to prevent it from using unimplemented CSR-related instructions?
I'm still quite confused after scanning through the RISC-V Privileged ISA Specification.
Thanks!
Have a look at the RARS simulator as an example of a simple RISC-V implementation. It implements sufficient CSRs (e.g. the exception cause, processor status, exception pc, vector table address, etc.) that you can program an interrupt handler.
You'll need:
utvec — sets the exception handler address
ustatus — enables/disables interrupts
uscratch — needed by the software exception handler
ucause — gives the reason for the exception
uepc — gives the address of the uncompleted instruction at the exception
And some others.  In RARS, you can see the registers implemented in the register display, Control and Status tab.
I believe RARS supports the timer, so it has some CSRs for that. It also provides a floating-point unit, so there are CSRs for its exceptions as well as for rounding configuration. For handling memory access exceptions, it has utval. And then it offers some counters. See also table 2.2 in Document Version 20190608-Priv-MSU-Ratified.
I would think that your usage of CSRs would be restricted to standalone application configuration, e.g. initial bootup, and interrupt handling, both of which would be written in assembly.
Hard to imagine that compiled C code (object files, .o's) would touch the CSRs in any way.  If you have an example of that, please share it.
In some environments, the C implementation allows for standalone (e.g. unhosted) programs. It is possible that such a program created by some compiler includes startup configuration and an exception handler, though it is more likely that these would be user-supplied. See, for example, http://cs107e.github.io/guides/gcc/
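If you ever do need to touch a CSR from C (say, a trap handler written in C rather than assembly), a minimal sketch with GCC could look like this. It assumes a machine-mode-only core, so it uses mcause rather than the user-level ucause listed above, and relies on GCC's RISC-V interrupt attribute:

/* Sketch of a machine-mode trap handler in C; mcause and the interrupt
   attribute are assumptions for a bare-metal, M-mode-only RV32I target. */
static inline unsigned long read_mcause(void)
{
    unsigned long value;
    __asm__ volatile ("csrr %0, mcause" : "=r"(value));
    return value;
}

__attribute__((interrupt))           /* GCC saves/restores registers and returns with mret */
void trap_handler(void)
{
    unsigned long cause = read_mcause();
    (void)cause;                     /* dispatch on interrupt vs. exception cause here */
}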

Memory cache is not working properly

I'm working on a U-Boot test application that works with a special DMA engine. The DMA engine transfers data between memories without "notifying" the cache. Therefore, I expect that if I keep transferring different data to the same destination, I should read stale data.
However, I find that I always get the correct data the DMA engine sent. This makes me think that maybe the dcache is not enabled, so I tried the U-Boot built-in dcache command. It shows my data cache is enabled. I also checked the TLB table and all pages are marked as "write-back, write-allocate". So does that mean the cache is enabled?
A more interesting thing I found: I wrote a simple program that just keeps reading the same address, and by disabling the dcache with the dcache command, the time to run the test only tripled. I tried a similar simple test in Linux on the same hardware, where the cache gives more than a 15x performance boost, so this must not be a hardware issue.
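The read loop is essentially the following (a rough sketch; the address and iteration count are placeholders, and get_timer() is U-Boot's millisecond timer helper):

#include <common.h>     /* U-Boot: printf(), get_timer(); header layout varies by version */

/* Rough sketch of the read-loop timing test; address and count are placeholders. */
static void time_reads(void)
{
    volatile unsigned int *p = (volatile unsigned int *)0x80000000;
    unsigned int sum = 0;
    ulong start = get_timer(0);                /* millisecond timer */

    for (int i = 0; i < 10 * 1000 * 1000; i++)
        sum += *p;                             /* keep reading the same address */

    printf("elapsed: %lu ms (sum=%u)\n", get_timer(start), sum);
}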
In summary, my cache seems to be working to some extent but not fully, and it might be a configuration issue. Is there any theory that can explain what I found? How can I continue to debug this... Thanks
Let me answer it myself...
The code in U-Boot is a little misleading... it runs
set_section_dcache(i, DCACHE_WRITEBACK_WRITETHROUGH)
but after checking the MMU, it turns out that the memory type is actually set to device.
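As a side note, once the memory really is mapped cacheable, code that consumes DMA-written buffers in U-Boot normally has to do explicit cache maintenance around the transfer; a minimal sketch (the buffer address and size are placeholders, the two range functions are U-Boot's standard cache helpers):

#include <cpu_func.h>   /* flush_dcache_range()/invalidate_dcache_range() in recent U-Boot */

/* Placeholder DMA destination; start and size should be cache-line aligned. */
static void dma_read_example(unsigned long dst, unsigned long size)
{
    /* Before the device writes: flush any dirty CPU lines covering the buffer,
       so later evictions cannot overwrite the DMA'd data. */
    flush_dcache_range(dst, dst + size);

    /* ... start the DMA transfer and wait for it to complete ... */

    /* After the transfer: invalidate the range so the CPU re-reads fresh data
       from RAM instead of returning stale cache lines. */
    invalidate_dcache_range(dst, dst + size);
}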
