Salutations.
In general, every call to new should be paired with a corresponding cleanup call, such as delete.
Question: is it also mandatory in Debug mode to pair each new with a corresponding delete, or does the IDE take care of everything?
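For instance, the pairing in question looks like this (variable name is just for illustration):

int* p = new int(42);  // heap allocation
// ... use *p ...
delete p;              // the matching cleanup call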
Thank you in advance,
Michele.
In a C++ Win32 app I write a large file by appending blocks of about 64K, using code like this:
auto h = ::CreateFileA(
    "uncommited.dat",
    FILE_APPEND_DATA,       // open for appending
    FILE_SHARE_READ,        // share for reading
    NULL,                   // default security
    CREATE_NEW,             // create new file only
    FILE_ATTRIBUTE_NORMAL,  // normal file
    NULL);                  // no attr. template

char block[64 * 1024] = {};
for (int i = 0; i < 10000; ++i) {
    DWORD written = 0;
    ::WriteFile(h, block, sizeof(block), &written, NULL);
}
As far as I can see, if the process is terminated unexpectedly, blocks with numbers i >= N are lost, but blocks with numbers i < N remain valid, and I can read them when the app restarts, because the blocks themselves are not corrupted.
But what happens if the power is cut? Is it true that the entire file can be corrupted, or even end up with zero length?
Is it a good idea to do
FlushFileBuffers(h);
MoveFile("uncommited.dat", "commited.dat");
assuming that MoveFile is some kind of atomic operation, and, when the app restarts, open "commited.dat" as valid and delete "uncommited.dat" as corrupted. Or is there a better way?
MoveFile can work all right in the right situation. It has a few problems, though: for example, you can't have an existing file by the new name.
If that might occur (you're basically updating an existing file and want to ensure it won't get corrupted, by making a copy, modifying the copy, then replacing the old with the new), you probably want ReplaceFile rather than MoveFile.
With ReplaceFile, you write your data to uncommited.dat (or whatever name you prefer). Then yes, you probably want to call FlushFileBuffers, and finally ReplaceFile to replace the old file with the new one. This makes use of the NTFS journaling (which applies to file system metadata, not to the contents of your files), ensuring that only one of two outcomes is possible: either you have the old file (entirely intact) or the new one (also entirely intact). If power dies in the middle of making a change, NTFS will use its journal to roll back the transaction.
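A minimal sketch of that sequence, reusing the handle h from the question (error handling omitted):

::FlushFileBuffers(h);    // push buffered writes to disk
::CloseHandle(h);         // the replacement file must not be open
::ReplaceFileA(
    "commited.dat",       // file being replaced (must already exist)
    "uncommited.dat",     // the freshly written replacement
    NULL,                 // no backup of the old file
    0, NULL, NULL);

Note that ReplaceFile requires the replaced file to already exist; on the very first run you would fall back to a plain rename.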
NTFS does also support transactions, but Microsoft generally recommends against applications trying to use this directly. It apparently hasn't been used much since they added it (in Windows Vista), and MSDN hints that it's likely to be removed in some future version of Windows.
For an append-only scenario you can split the data into blocks (of constant or variable size). Each block should be accompanied by some form of checksum (SHA, MD5, CRC).
After a crash you can read each block sequentially and verify its checksum. The first damaged block and all blocks following it should be treated as lost (you can possibly inspect them and recover data manually).
To append more data, truncate the file to the end of the last correct block.
You can also write two copies in parallel and, after a crash, select the one with more good blocks.
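A sketch of that recovery pass, assuming a hypothetical record layout of [payload size][CRC-32][payload] (the layout and names are illustrative, not from the question):

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical on-disk record: [payload size][CRC-32 of payload][payload].
struct BlockHeader {
    uint32_t size;
    uint32_t crc;
};

// Small bitwise CRC-32 (slow but dependency-free), for illustration only.
uint32_t crc32(const uint8_t* p, size_t n) {
    uint32_t c = 0xFFFFFFFFu;
    for (size_t i = 0; i < n; ++i) {
        c ^= p[i];
        for (int k = 0; k < 8; ++k)
            c = (c >> 1) ^ (0xEDB88320u & (0u - (c & 1u)));
    }
    return ~c;
}

// Scan the file and return the offset just past the last valid block,
// i.e. the point to truncate to before appending more data.
long find_valid_prefix(FILE* f) {
    long good_end = 0;
    BlockHeader h;
    while (fread(&h, sizeof h, 1, f) == 1) {
        std::vector<uint8_t> payload(h.size);
        if (fread(payload.data(), 1, h.size, f) != h.size) break;
        if (crc32(payload.data(), h.size) != h.crc) break;
        good_end = ftell(f);  // this block verified; advance the high-water mark
    }
    return good_end;
}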
I've read most of the manual and am slowly getting my head around the things I need to write major modes, etc. I've not run into anything that explains the loop/cycle that Emacs goes through to apply the major mode (or even a minor mode).
For example: I type if while in go-mode and suddenly if is syntax-highlighted. I know that just typing ordinary letters amounts to self-insert-command. So how does Emacs react to the change in the buffer, unless either self-insert-command fires an event or just changing the buffer fires an event?
W.r.t. syntax highlighting, this is triggered by any change to the buffer, no matter which command made it. To do this, the package that keeps the highlighting up to date (typically jit-lock on behalf of font-lock) uses after-change-functions. See C-h v after-change-functions RET, and also check the corresponding documentation in the Emacs Lisp reference manual (reachable from the "Help" menu).
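A minimal sketch of that mechanism (the function name here is made up):

;; after-change-functions hands each hook function the change's start,
;; end, and the length of the text that was replaced.
(defun my-after-change (beg end old-len)
  (message "buffer changed between %d and %d" beg end))

;; The final t makes the hook buffer-local, as modes usually do.
(add-hook 'after-change-functions #'my-after-change nil t)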
I need to find where certain things happen during execution. Say I'm looking for the line that drops the user's authorization. It may be some processor like "security/logout", but not necessarily.
How should I debug MODX Revolution? debug_backtrace() gives me 30+ MB of text; reading that isn't realistic.
How can I quickly watch all the user code stored in the database as it executes?
You can either turn on the debug option in System Settings, or hack the core a bit:
http://www.virtudraft.com/blog/unofficial-parser-timer-for-modx-revolution.html
I have a VB6 application that I'm trying to make log differently. I have an existing flag in the registry which states whether the application is set to Debug mode, in which case it should write log output.
Within my code I then have lots of if statements checking whether this flag is true. This means a lot of processing time spent checking whether a statement is true, which may not be much by itself, but it happens so often that it's an overhead I would like to reduce.
The code is full of statements like this
If isDebug = True Then
    LogMessage("Log what is happening")
End If
So what I'm looking for is a better way to do this. I know I can set a debug mode within Project Properties -> Make, but that has to be set before building the .exe, and I want to be able to change it in production via the registry key.
Consider using a command line argument to set debug mode. I used to do this.
Dim sCommandLine() As String
Dim I As Long
sCommandLine = Split(Command$)
For I = 0 To UBound(sCommandLine)
    ' do something with each arg
Next I
You can also persist command line args inside the IDE, so you always have them when debugging. When running outside of the IDE, make a shortcut to the compiled application with the arguments in it.
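For example, the loop body might set the flag when a /debug switch is present (the switch name is an assumption, not from the question):

Dim sArgs() As String
Dim I As Long
sArgs = Split(Command$)
For I = 0 To UBound(sArgs)
    ' Enable debug logging when the hypothetical /debug switch is passed.
    If LCase$(sArgs(I)) = "/debug" Then isDebug = True
Next I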
I do something almost identical to what you have in mind in a lot of my code. Add this:
Sub LogDebug(ByVal strMsg As String)
    If (isDebug) Then
        LogMessage(strMsg)
    End If
End Sub
Then just call LogDebug in your main program body, or call LogMessage directly if it's something you always want to log, regardless of the debug flag.
I'm assuming isDebug is a boolean here. If it's a function call, you should just create a global flag that you set at the beginning of the code, and check that instead of looking at the registry over and over. I don't think checking a boolean is that much of a processing load, is it?
You want to call a function if a runtime flag is set. The only thing I can see that could be faster is:
If isDebug Then
    LogMessage("Log what is happening")
End If
But I doubt that either version would be the cause of performance problems. Most logging frameworks promote code like that and even put the flag/log level as a parameter to the function. Just be sure that you don't have other places where you needlessly compute a log message outside of the conditional statement.
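To illustrate that last point (DumpState here is a made-up, expensive routine):

' The argument is evaluated before LogDebug is entered,
' so DumpState runs even when isDebug is False:
LogDebug "State: " & DumpState()

' Guarding the call keeps the expensive work behind the flag:
If isDebug Then LogDebug "State: " & DumpState()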
You might evaluate why you need logging and if the logs produced are effective for that purpose.
If you are looking for a problem that can be trapped using VB error handling, consider a good error handling library like HuntERR31. With it you can choose to log only errors instead of the debug message you are now doing. Even if you don't use the library, the docs have a very good description of error handling in VB.
Another answer still:
Read your registry flag into your app at startup so that it's session-based (i.e. when you close and restart the app the flag will be checked again; there's no point in checking the registry with every single test).
Then (as per Tom's post) assign the value to a global variable and test that: far faster than a function call.
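A sketch of reading the flag once per session; note that GetSetting only reads VB's own registry area (HKCU\Software\VB and VBA Program Settings), so if your flag lives elsewhere you'd use the registry API instead. The key and value names here are assumptions:

Public isDebug As Boolean ' global flag, tested everywhere

Public Sub InitLogging()
    ' Read the registry once at startup instead of on every log call.
    isDebug = (GetSetting("MyApp", "Settings", "DebugMode", "0") = "1")
End Sub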
To speed up logging you may want to consider dimensioning a string buffer in your app and, once it has reached a specific size, firing it into your log file. Obviously there are certain problems with this approach, namely the volatility of the memory, but if you want performance over disk access I would recommend such an approach.
This would, of course, be a lot easier if you could show us some code for your logging process etc.
I am trying to start using erlang:trace/3 and the dbg module to trace the behaviour of a live production system without taking the server down.
The documentation is opaque (to put it mildly) and there don't appear to be any useful tutorials online.
What I spent all day trying to do was capture what was happening in a particular function by trying to apply a trace to Module:Function using dbg:c and dbg:p but with no success at all.
Does anyone have a succinct explanation of how to use trace in a live Erlang system?
On a non-live node, the basic steps for tracing function calls are:
> dbg:start(). % start dbg
> dbg:tracer(). % start a simple tracer process
> dbg:tp(Module, Function, Arity, []). % specify MFA you are interested in
> dbg:p(all, c). % trace calls (c) of that MFA for all processes.
... trace here
> dbg:stop_clear(). % stop tracer and clear effect of tp and p calls.
You can trace multiple functions at the same time. Add functions by calling tp for each function. If you want to trace non-exported functions, you need to call tpl. To remove functions, call ctp or ctpl in a similar manner. Some general tp/tpl calls are:
> dbg:tpl(Module, '_', []). % all calls in Module
> dbg:tpl(Module, Function, '_', []). % all calls to Module:Function with any arity.
> dbg:tpl(Module, Function, Arity, []). % all calls to Module:Function/Arity.
> dbg:tpl(M, F, A, [{'_', [], [{return_trace}]}]). % same as before, but also show return value.
The last argument is a match specification. You can play around with that by using dbg:fun2ms.
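For instance, the return_trace spec from the last line above can be generated in the shell rather than written by hand (Module, Function, Arity are placeholders, as before):

> Ms = dbg:fun2ms(fun(_) -> return_trace() end).
[{'_',[],[{return_trace}]}]
> dbg:tpl(Module, Function, Arity, Ms).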
You can select the processes to trace on with the call to p(). The items are described under erlang:trace. Some calls are:
> dbg:p(all, c). % trace calls to the selected functions by all processes
> dbg:p(new, c). % trace calls by processes spawned from now on
> dbg:p(Pid, c). % trace calls by given process
> dbg:p(Pid, [c, m]). % trace calls and messages of a given process
I guess you will never need to directly call erlang:trace, as dbg does pretty much everything for you.
A golden rule for a live node is to generate only as much trace output to the shell as still lets you type in dbg:stop_clear(). :)
I often use a tracer that will auto-stop itself after a number of events. For example:
dbg:tracer(process, {fun (_, 100) -> dbg:stop_clear();
                         (Msg, N) -> io:format("~p~n", [Msg]), N + 1
                     end, 0}).
If you are looking for debugging on remote nodes (or multiple nodes), search for pan, eper, inviso or onviso.
On live systems we rarely trace to the shell.
If the system is well configured, then it is already collecting the Erlang logs that are printed to the shell. I need not emphasize why this is crucial on any live node...
Let me elaborate on tracing to files:
It is possible to trace to a file, which produces binary output that can be converted and parsed later (for further analysis, an automated monitoring system, etc.).
An example could be:
Trace to multiple wrapped files (12 x 50 MB). Please always check the available disk space before starting such a big trace!
dbg:tracer(port,dbg:trace_port(file,{"/log/trace",wrap,atom_to_list(node()),50000000,12})).
dbg:p(all,[call,timestamp,return_to]).
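To read such a wrap log back later, the matching trace client call should look like this (the path and suffix mirror the trace_port call above):

dbg:trace_client(file, {"/log/trace", wrap, atom_to_list(node())}).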
Always test on a test node before entering anything into a live node's shell!
It is strongly advised to have a test node or a replica node to try the scripts on first.
That said, let's have a look at a basic tracing command sequence:
<1> dbg:stop_clear().
Always start by flushing trace ports and ensuring that no previous tracing interferes with the current trace.
<2> dbg:tracer().
Start the tracer process.
<3> dbg:p(all,[call, timestamp]).
In this case we are tracing for all processes and for function calls.
<4> dbg:tp( ... ).
As seen in Zed's answer.
<5> dbg:tpl( ... ).
As seen in Zed's answer.
<42> dbg:stop_clear().
Again, this is to ensure that all traces were written to the output and to avoid any later inconvenience.
You can:
add triggers by defining some fun()-s in the shell to stop the trace at a given time or event. Recursive fun()-s are the best way to achieve this, but be very careful when applying them.
apply a wide variety of match specifications to ensure that you only trace the specific process, the specific function call, the specific kinds of arguments... (see the sketch right after this list)
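For instance, to trace my_mod:my_fun/1 only when its argument is the atom foo, and also show the return value (my_mod, my_fun and foo are made-up names):

dbg:tpl(my_mod, my_fun, 1, [{[foo], [], [{return_trace}]}]).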
I had an issue a while back where we had to check the contents of an ETS table and, on the appearance of a certain entry, stop the trace within 2-3 minutes.
I also suggest the book Erlang Programming written by Francesco Cesarini. (Erlang Programming # Amazon)
The 'dbg' module is quite low-level stuff. There are two hacks that I use very frequently for the tasks I commonly need.

First, use the Erlang CLI/shell expansion code at http://www.snookles.com/erlang/user_default.erl. It was originally written (as far as I know) by Serge Aleynikov and has been a useful "so that's how I add custom functions to the shell" example. Compile the module and edit your ~/.erlang file to point to its path (see the comment at the top of the file).

Second, use the "redbug" utility bundled in the EPER collection of utilities. It's very easy to use 'dbg' to create millions of trace events in a few seconds; doing so in a production environment can be disastrous. For development or production use, redbug makes it nearly impossible to kill a running system with trace-induced overload.
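For illustration (module and function names are made up; msgs and time are redbug's own safety limits, after which it stops by itself):

redbug:start("my_mod:my_fun -> return", [{msgs, 10}, {time, 10000}]).
% stops automatically after 10 trace messages or 10 seconds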
If you would prefer a graphical tracer, try erlyberly. It lets you select the functions you would like to trace (on all processes, at the moment) and deals with the dbg API for you.
However, it does not protect against overload, so it is not suitable for production systems.