Xcode: change output of console information [duplicate]

When using Xcode 8+ and creating a new blank project, the following logs appear when running the application:
2016-06-13 16:33:34.406093 TestiOS10[8209:100611] bundleid: com.appc.TestiOS10, enable_level: 0, persist_level: 0, propagate_with_activity: 0
2016-06-13 16:33:34.406323 TestiOS10[8209:100607] Created DB, header sequence number = 248
2016-06-13 16:33:34.409564 TestiOS10[8209:100611] subsystem: com.apple.UIKit, category: HIDEvents, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0
2016-06-13 16:33:34.504117 TestiOS10[8209:100607] Created DB, header sequence number = 248
2016-06-13 16:33:34.548023 TestiOS10[8209:100607] subsystem: com.apple.BaseBoard, category: MachPort, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0
2016-06-13 16:33:34.568458 TestiOS10[8209:100608] subsystem: com.apple.FrontBoard, category: Common, enable_level: 0, persist_level: 0, default_ttl: 0, info_ttl: 0, debug_ttl: 0, generate_symptoms: 0, enable_oversize: 0, privacy_setting: 0
Has anyone already found a configuration to handle this?

Try this:
1 - From the Xcode menu open: Product > Scheme > Edit Scheme
2 - In your Environment Variables, set OS_ACTIVITY_MODE = disable

Building on the original tweet from @rustyshelf and the illustrated answer from iDevzilla, here's a solution that silences the noise from the simulator without disabling NSLog output from the device.
Under Product > Scheme > Edit Scheme... > Run (Debug), set the OS_ACTIVITY_MODE environment variable to ${DEBUG_ACTIVITY_MODE} so it looks like this:
Go to your project build settings, and click + to add a User-Defined Setting named DEBUG_ACTIVITY_MODE. Expand this setting and click the + next to Debug to add a platform-specific value. Select the dropdown and change it to "Any iOS Simulator". Then set its value to "disable" so it looks like this:

OS_ACTIVITY_MODE didn't work for me (it may have been because I typo'd disable as disabled, but isn't that more natural?!?), or at least didn't prevent a great deal of messages. So here's the real deal with the environment variables.
https://llvm.org/svn/llvm-project/lldb/trunk/source/Plugins/Platform/MacOSX/PlatformDarwin.cpp
lldb_private::Error
PlatformDarwin::LaunchProcess(lldb_private::ProcessLaunchInfo &launch_info) {
  // Starting in Fall 2016 OSes, NSLog messages only get mirrored to stderr
  // if the OS_ACTIVITY_DT_MODE environment variable is set. (It doesn't
  // require any specific value; rather, it just needs to exist).
  // We will set it here as long as the IDE_DISABLED_OS_ACTIVITY_DT_MODE flag
  // is not set. Xcode makes use of IDE_DISABLED_OS_ACTIVITY_DT_MODE to tell
  // LLDB *not* to muck with the OS_ACTIVITY_DT_MODE flag when they
  // specifically want it unset.
  const char *disable_env_var = "IDE_DISABLED_OS_ACTIVITY_DT_MODE";
  auto &env_vars = launch_info.GetEnvironmentEntries();
  if (!env_vars.ContainsEnvironmentVariable(disable_env_var)) {
    // We want to make sure that OS_ACTIVITY_DT_MODE is set so that
    // we get os_log and NSLog messages mirrored to the target process
    // stderr.
    if (!env_vars.ContainsEnvironmentVariable("OS_ACTIVITY_DT_MODE"))
      env_vars.AppendArgument(llvm::StringRef("OS_ACTIVITY_DT_MODE=enable"));
  }
  // Let our parent class do the real launching.
  return PlatformPOSIX::LaunchProcess(launch_info);
}
So setting OS_ACTIVITY_DT_MODE to "NO" in the environment variables (GUI method explained in Schemes screenshot in main answer) makes it work for me.
As far as NSLog being the dumping ground for system messages, errors, and your own debugging goes: a real logging approach is probably called for anyway, e.g. https://github.com/fpillet/NSLogger .
OR
Drink the new Kool-Aid: http://asciiwwdc.com/2016/sessions/721 https://developer.apple.com/videos/play/wwdc2016/721/
It's not surprising that there are some hitches after overhauling the entire logging API.
ADDENDUM
Anyway, NSLog is just a shim:
https://developer.apple.com/library/content/releasenotes/Miscellaneous/RN-Foundation-OSX10.12/
NSLog / CFLog
NSLog is now just a shim to os_log in most circumstances.
It only makes sense now to quote the source for the other environment variable. This one comes from quite a different place, Apple's open-source internals; I'm not sure why the two overlap. [Incorrect comment about NSLog removed]
[Edited 22 Sep]: I wonder what "release" and "stream" do differently from "debug". Not enough source.
https://github.com/macosforge/libdispatch/blob/8e63547ea4e5abbfe55c0c3064181c4950a791d3/src/voucher.c
e = getenv("OS_ACTIVITY_MODE");
if (e) {
    if (strcmp(e, "release") == 0) {
        mode = voucher_activity_mode_release;
    } else if (strcmp(e, "debug") == 0) {
        mode = voucher_activity_mode_debug;
    } else if (strcmp(e, "stream") == 0) {
        mode = voucher_activity_mode_stream;
    } else if (strcmp(e, "disable") == 0) {
        mode = voucher_activity_mode_disable;
    }
}

A tweet had the answer for me - https://twitter.com/rustyshelf/status/775505191160328194
To stop the Xcode 8 iOS Simulator from logging like crazy, set an environment variable OS_ACTIVITY_MODE = disable in your debug scheme.
It worked.

Please find the steps below.
Select Product => Scheme => Edit Scheme, or use the shortcut Cmd + <.
Select the Run option on the left side.
In the Environment Variables section, add the variable OS_ACTIVITY_MODE = disable.
For more information, please see the GIF representation below.

This is still not fixed in Xcode Version 8.0 beta 2 (8S162m) for me, and extra logs are still appearing in the Xcode console.
** EDIT 8/1/16: This has been acknowledged in the release notes for Xcode 8 Beta 4 (8S188o) as an issue that still persists.
Known Issues in Xcode 8 beta 4 – IDE
Debugging
• Xcode Debug Console shows extra logging from system frameworks when
debugging applications in the Simulator. (27331147, 26652255)
Presumably this will be resolved by the GM release. Until then, patience; the workaround I'm using (not ideal, but it works) is below...
Similar to the previous answer, I am having to:
prefix my print logs with some kind of special character (e.g. * or ^ or !)
then use the search box on the bottom right of the console pane to filter my console logs, entering my chosen special character so the console displays only my print statements as intended

My solution is to use the debugger command and/or Log Message actions in breakpoints,
and to change the console output from All Output to Debugger Output, like so:

Alright. There seems to be a lot of commotion about this one, so I'll give y'all a way to persist it without using that scheme trick. I'll address the iOS Simulator specifically, but this also might need to be applied for the TV Sim as well which is located in a different dir.
The problem causing all of this is a set of plists located within the Xcode directory. A process called configd_sim is launched when the Simulator starts; it reads those plists and prints debugging information if the plists specify that it should be logged.
The plists are located here:
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/Preferences/Logging/Subsystems
If you are playing around with a beta, take note that the dir will be different.
You will see numerous plists in this directory. Now build and run your application and observe the logs. You are looking for the name that immediately follows the subsystem: part; that name identifies the corresponding problematic plist.
From there, either modify the plist to knock out the debugging [Level] key/value (a dictionary containing the "Enable" => "Default" pair)... or simply delete the plist. Note that you will need to be root to do either of these, since the files live inside the Xcode application.
The plutil -p command might be of use to you as well, e.g.:
plutil -p /Applications/Xcode.app/Contents/Developer/Platforms/AppleTVSimulator.platform/Developer/SDKs/AppleTVSimulator.sdk/System/Library/Preferences/Logging/Subsystems/com.apple.BackBoardServices.fence.plist
This gave me one of the problematic plists which contained:
{ "DEFAULT-OPTIONS" => { "Level" => { "Enable" => "Default" }}}
Good luck :]

This is related to a known issue with logging found in the Xcode 8 Beta Release Notes (I also asked an engineer at WWDC about it).
When debugging WatchOS applications in the Watch simulator, the OS may produce an excessive
amount of unhelpful logging. (26652255)
There is currently no workaround available; you must wait for a new version of Xcode.
EDIT 7/5/16: This is supposedly fixed as of Xcode 8 Beta 2:
Resolved in Xcode 8 beta 2 – IDE
Debugging
When debugging an app on the Simulator, logs are visible. (26457535)
Xcode 8 Beta 2 Release Notes

This is no longer an issue in Xcode 8.1 (tested with Version 8.1 beta (8T46g)). You can remove the OS_ACTIVITY_MODE environment variable from your scheme.
https://developer.apple.com/go/?id=xcode-8.1-beta-rn
Debugging
• Xcode Debug Console no longer shows extra logging from system
frameworks when debugging applications in the Simulator. (26652255,
27331147)

In Xcode 10, setting the OS_ACTIVITY_MODE variable to disable (or default) also turns off NSLog output, no matter what.
So if you want to get rid of the console noise but not of your own logs, you could try the good old printf("") instead of NSLog, since it is not affected by OS_ACTIVITY_MODE = disable.
But better check out the new os_log API here.
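For reference, a minimal sketch of the unified logging (os_log) API mentioned above, assuming iOS 10 / macOS 10.12 or later in a Swift file; the subsystem and category strings are placeholders, not values from this answer:
import os.log
// A named log handle; the subsystem/category values are illustrative.
let uiLog = OSLog(subsystem: "com.example.MyApp", category: "ui")
// Messages carry a level and can be filtered by subsystem/category in Console.app.
os_log("View did appear", log: uiLog, type: .info)
os_log("Loaded %d items", log: uiLog, type: .debug, 42)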

Please note that on the iOS 14 Simulator, OS_ACTIVITY_MODE=disable also suppresses logs written with the new Swift Logger. You will have to remove the variable or set it to enable.
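To illustrate what that refers to, a small sketch of the os Logger API (iOS 14+ / macOS 11+); subsystem and category are again placeholder values:
import os
let logger = Logger(subsystem: "com.example.MyApp", category: "networking")
logger.debug("Request started")             // shows in the Xcode console and Console.app
logger.error("Request failed, code \(42)")  // string interpolation is supported directly
With OS_ACTIVITY_MODE set to disable in the scheme, these lines will not appear in the Simulator's console.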

I read all the solutions and nothing worked so far. I am using Xcode 12.5 at the time of writing this answer, and it is extremely annoying to see the debugger flooded with such a heap of unwanted messages.
The solution I used is a lot simpler than most of the complex ones out there; here is what I did to solve this annoying issue:
There is no need to use any environment variables; as the previous posts note, those would also disable NSLog output, which isn't what you want in this case.
I made sure that all my NSLog calls contain a string that I can monitor (filter on) later. For example:
Previously I used to do:
NSLog(#"Hello World");
Now, I do:
NSLog(#"[Admin]: Hello World");
Whereas "[Admin]" is the name of my program (or whatever string you like other than [Admin]"
(Optional Step) or (Easier), I wrote a function to channel (well; proxy) NSLog into another global function "Log" whereas all my logged would go to that function, and then the Log function would append the "[Admin]" string to the original logging string and also add logging-time and other items I needed etc (similar to WebLogic or any other JAVA logging), this ensure that all logs would follow one standard.
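As a rough illustration of that proxy idea, here is a minimal Swift sketch (the code above is Objective-C; the log(_:) helper name and the "[Admin]" tag are just examples, not part of the original answer):
import Foundation
// All logging goes through this one helper so every line carries the same tag.
func log(_ message: String) {
    NSLog("[Admin]: %@", message)   // NSLog already prepends the timestamp
}
log("Hello World")   // prints "... [Admin]: Hello World", filterable on "[Admin]"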
... Here is the million-dollar trick:
Now go into your debugger window; at the bottom you will see a search field labeled "Filter". Enter your string (for example "[Admin]"), press Enter, and the job is done.
This way you will only see the lines you want to see.
This is also useful when you want to filter out other components; for example, for network communication I use additional tags that I can filter on later.
In other words, the annoying log lines that have nothing to do with your app will still be there, but you choose not to see them using the filter above, which I hope solves the issue.
Hopefully a good day to all.
H

This solution has been working for me:
Run the app in the simulator
Open the system log (⌘ + /)
This will dump out all of the debug data and also your NSLogs.
To filter just your NSLog statements:
Prefix each with a symbol, for example: NSLog(@"^ Test Log")
Filter the results using the search box on the top right, "^" in the case above
This is what you should get:

Related

Swift: using NSRunningApplication

I'm trying to activate a running application, but can't seem to get the app.activateWithOptions call right, as each of the four attempts below results in compile-time errors.
import AppKit

var ws = NSWorkspace.sharedWorkspace()
var apps = ws.runningApplications
var app :NSRunningApplication

for app in apps {
    if (app.activationPolicy == NSApplicationActivationPolicy.Regular) {
        app.activateWithOptions(options: ActivateIgnoringOtherApps)
        app.activateWithOptions(options: NSApplicationActivateIgnoringOtherApps)
        app.activateWithOptions(options: NSRunningApplication.ActivateIgnoringOtherApps)
        app.activateWithOptions(options: NSRunningApplication.NSApplicationActivateIgnoringOtherApps)
        println(app.localizedName)
    }
}
The declaration var app : NSRunningApplication is pointless, because the app in the for...in line declares a different app. You should just delete that line; it is only misleading and confusing you.
Thus, it is that app, the one in the for...in that you need to specify the type of. You won't get any further in this attempt to compile until you fix that:
for app in apps as [NSRunningApplication] {
Now you can begin to fix your errors, one by one. I'll just give you a hint for the first one; it should be:
app.activateWithOptions(.ActivateIgnoringOtherApps)
The remaining three lines are left as an exercise for the reader! Even then you will still be in some trouble, though, because you are not combining the options - instead, you are activating the same app four separate times, which is silly and not at all what you want. You should combine the options first and then activate the app once.
On the whole, it looks from your code as if you do not know Swift at all. You really should stop and learn it before you try to use it. Otherwise you'll just be flailing (as you clearly are now).
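For what it's worth, a sketch of that combined call in today's Swift syntax (the question uses pre-Swift 3 code; the API names below follow current AppKit and are not taken from the answer above):
import AppKit
// Activate each regular app once, with both options combined into one option set.
for app in NSWorkspace.shared.runningApplications where app.activationPolicy == .regular {
    app.activate(options: [.activateIgnoringOtherApps, .activateAllWindows])
    print(app.localizedName ?? "")
}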
Thanks for the answer, Matt. I'll add that if you are looking for just your own application (rather than all running applications), you can add a line before activating it. The following lines assign your application to the constant "app" and then activate it. This is useful when you want to get an NSAlert on the screen from an app running in the background.
let app = NSRunningApplication.currentApplication()
app.activateWithOptions(.ActivateIgnoringOtherApps)
let user_choice = alert.runModal()

how to write to output window in Visual Studio 2013?

I am able to compile and run my Visual C++ program. It is not a console application; I am modifying an existing MFC application. I am trying to troubleshoot my program. I cannot run it in the debugger and get the traces I need, because my program also reads the mouse cursor. I also tried using a message box to output the string, but again, a message box interferes with the mouse.
In the output window, I right-click and make sure that Program Output is enabled.
cout << "something" << endl;
However, in the output window, I do not see anything.
Looking at SO posts, I tried
std::string stro = "something ";
OutputDebugString(stro);
error C2664: 'void OutputDebugStringW(LPCWSTR)' : cannot convert
argument 1 from 'std::string' to 'LPCWSTR'
So changed to std::wstring stro
and
OutputDebugString(stro.c_str());
to append an int, I had to
std::wostringstream wso;
wso << i;
stro = stro + wso.str();
OutputDebugString(stro.c_str());
But I cannot see output in the window when not running in DEBUG. Is there any way to see output in non-DEBUG? This is surprisingly frustrating.
In the comments, writing a separate overloaded class was suggested; this seems like overkill. Java has System.out.println even in GUI programs. Android has Log.v(). Such a pity the equivalent is not available here.
Console applications have console output, which is where cout sends its output. Since this isn't a console application, you'll want to try another approach. It's strange to object "I don't see debug output when I'm not in Debug"; that's the point of it. Don't use OutputDebugString and expect it to be something other than debug output.
I think the best way to understand what your application is doing, without interacting with it under the debugger (I know the frustration of trying to debug event handlers that keep being re-triggered by your debugging activities) is to try tracepoints. I blogged about them back in 2006 and they're still a great technique. Reading the mouse cursor won't interfere with tracepoints, and you can turn them on and off without rebuilding your app or making any code changes.
Set up a breakpoint, then right-click the red dot and choose When Hit. You can adjust the default message that goes into the output window to show you any values you are interested in - you can even call functions in the trace message, as you can see in the blog entry where I call the size() method of a collection. If necessary, you can even debug a release build.
Actually OutputDebugString should work in release builds - as long as you're running the app from the debugger. However cout cannot route output to the VS output pane.
If you already have a lot of 'cout' style debugging code, the easiest route might be to replace it with a custom ostream overload which does print to the output pane. Here's one, here's another.
If you can rewrite the debugging code completely, some macro wrappers around OutputDebugString might be better for you.
Having a stdout/stderr console window for debugging is very useful, and I always miss it when working on WinMain based apps. I use this code fragment to create a console for windows apps (and chain it in with existing loggers, etc.) After running this, your cout/cerr should work fine. OutputDebugString output can be seen outside of DevStudio using the DebugView app, but I prefer this:
#include <io.h>
#include <fcntl.h>
...
// DOS box console for stdin/stdout/stderr
void makeConsole()
{
    AllocConsole();

    // Re-route the CRT's stdout/stdin/stderr to the new console.
    // Note: assigning to *stdout works with the pre-2015 CRT used by VS2013;
    // cast to intptr_t (not long) so the HANDLE is not truncated on 64-bit builds.
    HANDLE handle_out = GetStdHandle(STD_OUTPUT_HANDLE);
    int hCrt = _open_osfhandle((intptr_t)handle_out, _O_TEXT);
    FILE* hf_out = _fdopen(hCrt, "w");
    setvbuf(hf_out, NULL, _IONBF, 2);
    *stdout = *hf_out;

    HANDLE handle_in = GetStdHandle(STD_INPUT_HANDLE);
    hCrt = _open_osfhandle((intptr_t)handle_in, _O_TEXT);
    FILE* hf_in = _fdopen(hCrt, "r");
    setvbuf(hf_in, NULL, _IONBF, 2);
    *stdin = *hf_in;

    HANDLE handle_err = GetStdHandle(STD_ERROR_HANDLE);
    hCrt = _open_osfhandle((intptr_t)handle_err, _O_TEXT);
    FILE* hf_err = _fdopen(hCrt, "w");
    setvbuf(hf_err, NULL, _IONBF, 2);
    *stderr = *hf_err;

    // Keep the C++ iostreams in sync with the redirected C streams.
    std::ios::sync_with_stdio();
}

How to disable the "local declaration of 'foo' hides instance variable" warning?

I understand what the warning says. This is exactly how scoping rules work. I appreciate that some people want a nanny. I don't. How can I disable this warning?
You can't disable this warning in the current version of Xcode. There is no build setting for it, nor does it have a warning ID which you could use to pass a flag to the compiler to tell it to quit whining.
For future reference, you can find a warning ID by going to the Log Navigator, clicking on the most recent build where the warning appeared, drilling down into the log to find the point where the 'compile' task shows up with an exclamation mark, and clicking the 'more detail' button, which looks like a gray callout with 5 horizontal lines. You'll see the warnings/errors listed in detail, and if there was a warning ID it would appear on the line detailing the warning in yellow. At least, that's what I've been told by one of the compiler engineers at Apple. I have yet to see a warning that I actually want to disable have an ID appear in the log.
File a bug report with Apple, tell them you want more options to disable warnings in Xcode, and list this one specifically.
In Xcode 4.4 this can be disabled by going to the build setting "Other Warning Flags" and setting the value "-Wno-shadow-ivar"
The warning name you're looking for is "shadow-ivar". It appears in the log in Xcode 4.3 as something like warning: local declaration of 'foo' hides instance variable [-Wshadow-ivar].
I still don't see it in the project's Build Settings list, but #pragma clang diagnostic ignored "-Wshadow-ivar" will take care of it in one file at a time. I actually like to turn it off for particular functions and then turn it on again. That way it ceases to nag me where I've decided I don't care but will warn me again in new code.
NSString *foo;

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wshadow-ivar"
- (void)myFunctionWithShadow_ivarWarningsIgnored {
    NSString *foo = @"...";
    NSLog(@"This shouldn't get a warning %@", foo);
}
#pragma clang diagnostic pop

- (void)myFunctionWithShadow_ivarWarningsNotIgnored {
    NSString *foo = @"...";
    NSLog(@"and this should %@", foo);
}
Good Luck! :)
Update for Xcode 8.3 - A compiler bug yields "Declaration shadows a local variable" in some instances when the shadowing is intentional... and the nanny panics.
For example in Objective C:
Given
typedef BOOL ( ^BoolBoolBlock ) ( BOOL );
The nature of Apple Blocks makes any variable declared in the scope immediately enclosing a block visible inside the block (a pseudo-global). This results in the warnings (and errors, if warnings == errors in your settings) on the line BOOL theResult = false;:
- (BoolBoolBlock) boolBoolBlock {
    BoolBoolBlock theResult = nil;

    theResult = ^BOOL ( unused BOOL is ) {
        BOOL theResult = false; // hides (shadows) the outer theResult (a good thing)
        /*
           call back code goes here;
           all variables local in scope to this method are global to the block, so be careful
         */
        return theResult;
    };

    return theResult;
}
The nanny sees BoolBoolBlock theResult = nil; being shadowed by BOOL theResult = false;, which is actually intentional, for two reasons in this case:
by convention in my code ALL return values are named theResult, no matter what
the shadowing is a positive side effect, because I am morally opposed to globals.
In other words, this entire construct is set up to block the pseudo-global mechanism of Apple Blocks and put structure on that chaos. Blocking the method's "theResult" from use in the block that the method returns is a good thing... yet the nanny has a hissy fit.
To calm the nanny down (get rid of warnings or possibly errors if you have the discipline to set warnings as errors), you simply make this setting change in your Project File -> Build Settings -> filter on "Other" -> Hidden Local Variables -> change to "No" ... or visually:

What is your favorite R debugging trick? [duplicate]

I get an error when using an R function that I wrote:
Warning messages:
1: glm.fit: algorithm did not converge
2: glm.fit: algorithm did not converge
What I have done:
Step through the function
Adding print to find out at what line the error occurs suggests two functions that should not use glm.fit. They are window() and save().
My general approaches include adding print and stop commands, and stepping through a function line by line until I can locate the exception.
However, it is not clear to me using those techniques where this error comes from in the code. I am not even certain which functions within the code depend on glm.fit. How do I go about diagnosing this problem?
I'd say that debugging is an art form, so there's no clear silver bullet. There are good strategies for debugging in any language, and they apply here too (e.g. read this nice article). For instance, the first thing is to reproduce the problem...if you can't do that, then you need to get more information (e.g. with logging). Once you can reproduce it, you need to reduce it down to the source.
Rather than a "trick", I would say that I have a favorite debugging routine:
When an error occurs, the first thing that I usually do is look at the stack trace by calling traceback(): that shows you where the error occurred, which is especially useful if you have several nested functions.
Next I will set options(error=recover); this immediately switches into browser mode where the error occurs, so you can browse the workspace from there.
If I still don't have enough information, I usually use the debug() function and step through the script line by line.
The best new trick in R 2.10 (when working with script files) is to use the findLineNum() and setBreakpoint() functions.
As a final comment: depending upon the error, it is also very helpful to set try() or tryCatch() statements around external function calls (especially when dealing with S4 classes). That will sometimes provide even more information, and it also gives you more control over how errors are handled at run time.
These related questions have a lot of suggestions:
Debugging tools for the R language
Debugging lapply/sapply calls
Getting the state of variables after an error occurs in R
R script line numbers at error?
The best walkthrough I've seen so far is:
http://www.biostat.jhsph.edu/%7Erpeng/docs/R-debug-tools.pdf
Anybody agree/disagree?
As was pointed out to me in another question, Rprof() and summaryRprof() are nice tools to find slow parts of your program that might benefit from speeding up or moving to a C/C++ implementation. This probably applies more if you're doing simulation work or other compute- or data-intensive activities. The profr package can help visualizing the results.
I'm on a bit of a learn-about-debugging kick, so another suggestion from another thread:
Set options(warn=2) to treat warnings like errors
You can also use options to drop you right into the heat of the action when an error or warning occurs, using your favorite debugging function of choice. For instance:
Set options(error=recover) to run recover() when an error occurs, as Shane noted (and as is documented in the R debugging guide), or any other handy function you would find useful to have run.
And another two methods from one of @Shane's links:
Wrap an inner function call with try() to return more information on it.
For *apply functions, use .inform=TRUE (from the plyr package) as an option to the apply command
@JoshuaUlrich also pointed out a neat way of using the conditional abilities of the classic browser() command to turn debugging on/off:
Put inside the function you might want to debug browser(expr=isTRUE(getOption("myDebug")))
And set the global option by options(myDebug=TRUE)
You could even wrap the browser call in a helper, myBrowse <- function() browser(expr=isTRUE(getOption("myDebug"))), and then call it with myBrowse(), since it uses the global option.
Then there are the new functions available in R 2.10:
findLineNum() takes a source file name and line number and returns the function and environment. This seems to be helpful when you source() a .R file and it returns an error at line #n, but you need to know what function is located at line #n.
setBreakpoint() takes a source file name and line number and sets a breakpoint there
The codetools package, and particularly its checkUsage function can be particularly helpful in quickly picking up syntax and stylistic errors that a compiler would typically report (unused locals, undefined global functions and variables, partial argument matching, and so forth).
setBreakpoint() is a more user-friendly front-end to trace(). Details on the internals of how this works are available in a recent R Journal article.
If you are trying to debug someone else's package, once you have located the problem you can over-write their functions with fixInNamespace and assignInNamespace, but do not use this in production code.
None of this should preclude the tried-and-true standard R debugging tools, some of which are above and others of which are not. In particular, the post-mortem debugging tools are handy when you have a time-consuming bunch of code that you'd rather not re-run.
Finally, for tricky problems which don't seem to throw an error message, you can use options(error=dump.frames) as detailed in this question:
Error without an error being thrown
At some point, glm.fit is being called. That means one of the functions you call, or one of the functions called by those functions, is using either glm or glm.fit.
Also, as I mention in my comment above, that is a warning, not an error, which makes a big difference. You can't trigger any of R's debugging tools from a warning (with default options; before someone tells me I am wrong ;-).
If we change the options to turn warnings into errors then we can start to use R's debugging tools. From ?options we have:
‘warn’: sets the handling of warning messages. If ‘warn’ is
negative all warnings are ignored. If ‘warn’ is zero (the
default) warnings are stored until the top-level function
returns. If fewer than 10 warnings were signalled they will
be printed otherwise a message saying how many (max 50) were
signalled. An object called ‘last.warning’ is created and
can be printed through the function ‘warnings’. If ‘warn’ is
one, warnings are printed as they occur. If ‘warn’ is two or
larger all warnings are turned into errors.
So if you run
options(warn = 2)
then run your code, R will throw an error. At which point, you could run
traceback()
to see the call stack. Here is an example.
> options(warn = 2)
> foo <- function(x) bar(x + 2)
> bar <- function(y) warning("don't want to use 'y'!")
> foo(1)
Error in bar(x + 2) : (converted from warning) don't want to use 'y'!
> traceback()
7: doWithOneRestart(return(expr), restart)
6: withOneRestart(expr, restarts[[1L]])
5: withRestarts({
.Internal(.signalCondition(simpleWarning(msg, call), msg,
call))
.Internal(.dfltWarn(msg, call))
}, muffleWarning = function() NULL)
4: .signalSimpleWarning("don't want to use 'y'!", quote(bar(x +
2)))
3: warning("don't want to use 'y'!")
2: bar(x + 2)
1: foo(1)
Here you can ignore the frames marked 4: and higher. We see that foo called bar and that bar generated the warning. That should show you which functions were calling glm.fit.
If you now want to debug this, we can turn to another option to tell R to enter the debugger when it encounters an error, and as we have made warnings errors we will get a debugger when the original warning is triggered. For that you should run:
options(error = recover)
Here is an example:
> options(error = recover)
> foo(1)
Error in bar(x + 2) : (converted from warning) don't want to use 'y'!
Enter a frame number, or 0 to exit
1: foo(1)
2: bar(x + 2)
3: warning("don't want to use 'y'!")
4: .signalSimpleWarning("don't want to use 'y'!", quote(bar(x + 2)))
5: withRestarts({
6: withOneRestart(expr, restarts[[1]])
7: doWithOneRestart(return(expr), restart)
Selection:
You can then step into any of those frames to see what was happening when the warning was thrown.
To reset the above options to their default, enter
options(error = NULL, warn = 0)
As for the specific warning you quote, it is highly likely that you need to allow more iterations in the code. Once you've found out what is calling glm.fit, work out how to pass it the control argument using glm.control - see ?glm.control.
So browser(), traceback() and debug() walk into a bar, but trace() waits outside and keeps the motor running.
By inserting browser somewhere in your function, the execution will halt and wait for your input. You can move forward using n (or Enter), run the entire chunk (iteration) with c, finish the current loop/function with f, or quit with Q; see ?browser.
With debug, you get the same effect as with browser, but this stops the execution of a function at its beginning. Same shortcuts apply. This function will be in a "debug" mode until you turn it off using undebug (that is, after debug(foo), running the function foo will enter "debug" mode every time until you run undebug(foo)).
A more transient alternative is debugonce, which will remove the "debug" mode from the function after the next time it's evaluated.
traceback will give you the flow of execution of functions all the way up to where something went wrong (an actual error).
You can insert code bits (i.e. custom functions) into functions using trace, for example browser. This is useful for functions from packages when you're too lazy to dig out the nicely folded source code.
My general strategy looks like:
Run traceback() to look for obvious issues
Set options(warn=2) to treat warnings like errors
Set options(error=recover) to step into the call stack on error
After going through all the steps suggested here I just learned that setting .verbose = TRUE in foreach() also gives me tons of useful information. In particular foreach(.verbose=TRUE) shows exactly where an error occurs inside the foreach loop, while traceback() does not look inside the foreach loop.
Mark Bravington's debugger, which is available as the package debug on CRAN, is very good and pretty straightforward.
library(debug);
mtrace(myfunction);
myfunction(a,b);
#... debugging, can query objects, step, skip, run, breakpoints etc..
qqq(); # quit the debugger only
mtrace.off(); # turn off debugging
The code pops up in a highlighted Tk window so you can see what's going on and, of course you can call another mtrace() while in a different function.
HTH
I like Gavin's answer: I did not know about options(error = recover). I also like to use the 'debug' package that gives a visual way to step through your code.
require(debug)
mtrace(foo)
foo(1)
At this point it opens up a separate debug window showing your function, with a yellow line showing where you are in the code. In the main window the code enters debug mode, and you can keep hitting enter to step through the code (and there are other commands as well), and examine variable values, etc. The yellow line in the debug window keeps moving to show where you are in the code. When done with debugging, you can turn off tracing with:
mtrace.off()
Based on the answer I received here, you should definitely check out the options(error=recover) setting. When this is set, upon encountering an error, you'll see text on the console similar to the following (traceback output):
> source(<my filename>)
Error in plot.window(...) : need finite 'xlim' values
In addition: Warning messages:
1: In xy.coords(x, y, xlabel, ylabel, log) : NAs introduced by coercion
2: In min(x) : no non-missing arguments to min; returning Inf
3: In max(x) : no non-missing arguments to max; returning -Inf
Enter a frame number, or 0 to exit
1: source(<my filename>)
2: eval.with.vis(ei, envir)
3: eval.with.vis(expr, envir, enclos)
4: LinearParamSearch(data = dataset, y = data.frame(LGD = dataset$LGD10), data.names = data
5: LinearParamSearch.R#66: plot(x = x, y = y.data, xlab = names(y), ylab = data.names[i])
6: LinearParamSearch.R#66: plot.default(x = x, y = y.data, xlab = names(y), ylab = data.nam
7: LinearParamSearch.R#66: localWindow(xlim, ylim, log, asp, ...)
8: LinearParamSearch.R#66: plot.window(...)
Selection:
At which point you can choose which "frame" to enter. When you make a selection, you'll be placed into browser() mode:
Selection: 4
Called from: stop(gettextf("replacement has %d rows, data has %d", N, n),
domain = NA)
Browse[1]>
And you can examine the environment as it was at the time of the error. When you're done with a frame, type c to get back to the frame selection menu; when you're finished, as it tells you, type 0 to exit.
I gave this answer to a more recent question, but am adding it here for completeness.
Personally I tend not to use functions to debug. I often find that this causes as much trouble as it solves. Also, coming from a Matlab background I like being able to do this in an integrated development environment (IDE) rather than doing this in the code. Using an IDE keeps your code clean and simple.
For R, I use an IDE called "RStudio" (http://www.rstudio.com), which is available for Windows, Mac, and Linux and is pretty easy to use.
Versions of Rstudio since about October 2013 (0.98ish?) have the capability to add breakpoints in scripts and functions: to do this, just click on the left margin of the file to add a breakpoint. You can set a breakpoint and then step through from that point on. You also have access to all of the data in that environment, so you can try out commands.
See http://www.rstudio.com/ide/docs/debugging/overview for details. If you already have Rstudio installed, you may need to upgrade - this is a relatively new (late 2013) feature.
You may also find other IDEs that have similar functionality.
Admittedly, if it's a built-in function you may have to resort to some of the suggestions made by other people in this discussion. But, if it's your own code that needs fixing, an IDE-based solution might be just what you need.
To debug Reference Class methods without instance reference
ClassName$trace(methodName, browser)
I am beginning to think that not printing the error line number - a most basic requirement - BY DEFAULT - is some kind of a joke in R/RStudio. The only reliable method I have found to locate where an error occurred is to make the additional effort of calling traceback() and looking at the top line.

Resources