I am working on slides where I want to show the output generated by `debugger()` and then exit. Knitting gets stuck on the chunk because it is waiting for user input, which in my case would be 0. Is there any way to force RStudio to knit in interactive mode?
Try using the R package
[subprocess](https://cran.r-project.org/package=subprocess) to run some code
in a separate R session. The following example has been built using code
snippets provided in the 'intro' vignette from the subprocess package and the
example provided in the help file for `debugger`.
By running the debugger code in a child process, you can script the interaction needed within the R code chunks of your primary document.
Example .Rmd file:
---
title: 'Debugger in a knitted doc'
output: html_document
---
```{r label='internal-setup', include=FALSE}
knitr::opts_chunk$set(collapse = TRUE)
```
Try using the R package
[subprocess](https://cran.r-project.org/package=subprocess) to run some code
in a separate R session. The following example has been built using code
snippets provided in the 'intro' vignette from the subprocess package and the
example provided in the help file for `debugger`.
The first thing we will do is build a function for calling the R binary.
```{r label="setup_subprocess"}
library(subprocess)
R_binary <- function() {
  R_exe <- ifelse(tolower(.Platform$OS.type) == "windows", "R.exe", "R")
  return(file.path(R.home("bin"), R_exe))
}
```
Spawning a child R process is done as follows:
```{r label="spawning"}
child_r <- subprocess::spawn_process(R_binary(), c("--vanilla"))
Sys.sleep(2) # allow sufficient time for the child R process to start
subprocess::process_read(child_r)$stdout
```
We will write some code as a character string and send it to the child
process.
```{r }
unlink("testdump.rda")
code <-
'
options(error = quote(dump.frames("testdump", TRUE)))
f <- function() {
g <- function() stop("test dump.frames")
g()
}
f() # will generate a dump on file "testdump.rda"
options(error = NULL)
'
invisible(subprocess::process_write(child_r, code))
subprocess::process_read(child_r)$stdout
```
Now, you can load the dump into the child process and start the debugger.
```{r }
code <-
'
load("testdump.rda")
debugger(testdump)
'
invisible(subprocess::process_write(child_r, code))
subprocess::process_read(child_r)$stdout
```
Say you want to start walking through the debugger in environment 1, the call to `f()`:
```{r }
invisible(subprocess::process_write(child_r, "1\n"))
subprocess::process_read(child_r)$stdout
```
You can continue to walk through the debugging as needed via the child
process.
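For instance, a further chunk along these lines (just a sketch, not reflected in the rendered output shown below; the exact commands you send depend on what you want to inspect) could list the objects in the selected frame, return to the environment menu, and then exit the debugger:
```{r }
# Hypothetical follow-up chunk: inspect the selected frame, then send "c" to
# return to the environment menu and "0" to leave the debugger.
invisible(subprocess::process_write(child_r, "ls()\n"))
subprocess::process_read(child_r)$stdout
invisible(subprocess::process_write(child_r, "c\n"))
invisible(subprocess::process_write(child_r, "0\n"))
subprocess::process_read(child_r)$stdout
```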
Don't forget to terminate the process
```{r }
subprocess::process_terminate(child_r)
```
This document resulted in an .html page that looked like this:
Debugger in a knitted doc
Try using the R package subprocess to run some code in a separate R session. The following example has been built using code snippets provided in the ‘intro’ vignette from the subprocess package and the example provided in the help file for debugger.
The first thing we will do is build a function for calling the R binary.
library(subprocess)
R_binary <- function() {
  R_exe <- ifelse(tolower(.Platform$OS.type) == "windows", "R.exe", "R")
  return(file.path(R.home("bin"), R_exe))
}
Spawning a child R process is done as follows:
child_r <- subprocess::spawn_process(R_binary(), c("--vanilla"))
Sys.sleep(2) # allow sufficient time for the child R process to start
subprocess::process_read(child_r)$stdout
## [1] ""
## [2] "R version 3.5.0 (2018-04-23) -- \"Joy in Playing\""
## [3] "Copyright (C) 2018 The R Foundation for Statistical Computing"
## [4] "Platform: x86_64-apple-darwin17.5.0 (64-bit)"
## [5] ""
## [6] "R is free software and comes with ABSOLUTELY NO WARRANTY."
## [7] "You are welcome to redistribute it under certain conditions."
## [8] "Type 'license()' or 'licence()' for distribution details."
## [9] ""
## [10] " Natural language support but running in an English locale"
## [11] ""
## [12] "R is a collaborative project with many contributors."
## [13] "Type 'contributors()' for more information and"
## [14] "'citation()' on how to cite R or R packages in publications."
## [15] ""
## [16] "Type 'demo()' for some demos, 'help()' for on-line help, or"
## [17] "'help.start()' for an HTML browser interface to help."
## [18] "Type 'q()' to quit R."
## [19] ""
## [20] "> "
We will write some code as a character string and send it to the child process.
unlink("testdump.rda")
code <-
'
options(error = quote(dump.frames("testdump", TRUE)))
f <- function() {
g <- function() stop("test dump.frames")
g()
}
f() # will generate a dump on file "testdump.rda"
options(error = NULL)
'
invisible(subprocess::process_write(child_r, code))
subprocess::process_read(child_r)$stdout
## [1] ""
## [2] "> options(error = quote(dump.frames(\"testdump\", TRUE)))"
## [3] "> "
## [4] "> f <- function() {"
## [5] "+ g <- function() stop(\"test dump.frames\")"
## [6] "+ g()"
## [7] "+ }"
## [8] "> f() # will generate a dump on file \"testdump.rda\""
Now, you can load the dump into the child process and start the debugger.
code <-
'
load("testdump.rda")
debugger(testdump)
'
invisible(subprocess::process_write(child_r, code))
subprocess::process_read(child_r)$stdout
## [1] "> options(error = NULL)"
## [2] "> "
## [3] "> load(\"testdump.rda\")"
## [4] "> debugger(testdump)"
## [5] "Message: Error in g() : test dump.frames"
## [6] "Calls: f -> g"
## [7] "Available environments had calls:"
## [8] "1: f()"
## [9] "2: g()"
## [10] "3: stop(\"test dump.frames\")"
## [11] ""
## [12] "Enter an environment number, or 0 to exit Selection: "
Say you want to start walking through the debugger in environment 1, the call to f()
invisible(subprocess::process_write(child_r, "1\n"))
subprocess::process_read(child_r)$stdout
## [1] "1"
## [2] "Browsing in the environment with call:"
## [3] " f()"
## [4] "Called from: debugger.look(ind)"
## [5] "Browse[1]> "
You can continue to walk through the debugging as needed via the child process.
Don’t forget to terminate the process
subprocess::process_terminate(child_r)
## [1] TRUE
You can get copies of the actual files from my github page.
I'm trying to use the Delve (dlv) "display" command to show the values of a slice and a map. The "print" command shows the full value, but "display" only ever shows "[...]".
Contrast the display and print output below:
(dlv) display
0: gns = []string len: 2, cap: 2, [...]
1: chGnMap = map[string]int [...]
(dlv) p gns
[]string len: 2, cap: 2, ["ecam","site"]
(dlv) p chGnMap
map[string]int [
"ecam": 2,
"site": 2,
]
(dlv) config -list
aliases map[]
substitute-path []
max-string-len 1024
max-array-values 1024
max-variable-recurse 10
disassemble-flavor <not defined>
show-location-expr false
source-list-line-color <nil>
source-list-arrow-color ""
source-list-keyword-color ""
source-list-string-color ""
source-list-number-color ""
source-list-comment-color ""
source-list-line-count <not defined>
debug-info-directories [/usr/lib/debug/.build-id]
(dlv) exit
# dlv version
Delve Debugger
Version: 1.7.2
This doesn't entirely answer your question, but:
When you are adding your display variables with display -a ..., you can reference a key in the map.
See steps below:
Add map w/ key supplied using display -a
Show that the key currently doesn't exist
The key is automatically added when the program advances
Note: I needed to append [0] to the display line because
handlerHeader["Content-Type"] returns a string slice.
(dlv) args
handler = (*main.ProduceHandler)(0x14000112d10)
wri = net/http.ResponseWriter(*net/http.response) 0x14000193708
req = ("*net/http.Request")(0x14000182000)
(dlv) display -a wri.w.wr.res.handlerHeader["Content-Type"][0]
0: wri.w.wr.res.handlerHeader["Content-Type"][0] = error key not found
(dlv) print %T wri.w.wr.res.handlerHeader
net/http.Header []
(dlv) n
> main.(*ProduceHandler).ServeHTTP() ./api.go:144 (PC: 0x100984480)
139: switch req.Method {
140: case http.MethodGet:
141: if len(req.URL.Query()["code"]) == 0 {
142: log.Println("Sending entire produce database")
143: wri.Header().Add("Content-Type", "application/json")
=> 144: wri.WriteHeader(http.StatusOK)
145: json.NewEncoder(wri).Encode(handler.DB)
146: return
147: }
148:
149: c := req.URL.Query()["code"][0]
0: wri.w.wr.res.handlerHeader["Content-Type"][0] = "application/json"
I have used this website over a hundred times and it has helped me so much with my coding (in Python, Arduino, terminal commands and the Windows prompt). I thought I would put up some knowledge that I found, for things that Stack Overflow could not help me with but which may be helpful for others in a similar situation. So have a look at the code below. I hope it helps people with creating their own backup code. I am most proud of the "while '\r\n' in output" part of the code below:
output = child0.readline()
while '\r\n' in output:
msg.log(output.replace('\r\n', ''), logMode + 's')
output = child0.readline()
This detects EOF once the program has finished running, so you can relay the terminal program's output while it is still running.
I will be adding a Windows version to this code too. Possibly with robocopy.
If you have any questions about the code below, please do not hesitate to ask. NB: I have changed people's names and removed my username and passwords.
#!/usr/bin/python
# Written by irishcream24, amateur coder
import subprocess
import sys
import os.path
import logAndError # my own library to handle errors and log events
from inspect import currentframe as CF # help with logging
from inspect import getframeinfo as GFI # help with logging
import threading
import fcntl
import pexpect
import time
import socket
import time as t
from sys import platform
if platform == "win32":
import msvcrt
portSearch = "Uno"
portResultPosition = 1
elif platform == "darwin":
portSearch = "usb"
portResultPosition = 0
else:
print 'Unknown operating system'
print 'Ending Program...'
sys.exit()
# Check if another instance of the program is running; if so, stop the second one.
pid_file = 'program.pid'
fp = open(pid_file, 'w')
try:
    fcntl.lockf(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    # another instance is running
    print "Program already running, stopping the second instance..."
    sys.exit(1)
# Determine where main program files are stored
directory = os.path.dirname(os.path.realpath(__file__))
# To print stderr to both screen and file
errCounter = 0
exitFlag = [0]
class tee:
    def __init__(self, _fd1, _fd2):
        self.fd1 = _fd1
        self.fd2 = _fd2

    def __del__(self):
        if self.fd1 != sys.stdout and self.fd1 != sys.stderr:
            self.fd1.close()
        if self.fd2 != sys.stdout and self.fd2 != sys.stderr:
            self.fd2.close()

    def write(self, text):
        global errCounter
        global exitFlag
        if errCounter == 0:
            self.fd1.write('%s: ' % t.strftime("%d/%m/%y %H:%M"))
            self.fd2.write('%s: ' % t.strftime("%d/%m/%y %H:%M"))
            errCounter = 1
            exitFlag[0] = 1
        self.fd1.write(text)
        self.fd2.write(text)

    def flush(self):
        self.fd1.flush()
        self.fd2.flush()
# Error and log handling
errMode = 'pf' # p = print to screen, f = print to file, e = end program
errorFileAddress = '%s/errorFile.txt' %directory
outputlog = open(errorFileAddress, "a")
sys.stderr = tee(sys.stderr, outputlog)
logFileAddress = '%s/log.txt' %directory
logMode = 'pf' # p = print to screen, f = print to file
msg = logAndError.logAndError(errorFileAddress, logFileAddress)
# Set computer to be backed up
sourceComputer = 'DebbieMac'
try:
    sourceComputer = sys.argv[1]
except:
    print 'No source argument given.'
if sourceComputer == 'SamMac' or sourceComputer == 'DebbieMac' or sourceComputer == 'mediaCentre' or sourceComputer == 'garageComputer':
    pass
else:
    msg.error('incorrect source computer supplied!', errMode, GFI(CF()).lineno, exitFlag)
    sys.exit()
# Source and destination setup
backupRoute = 'network'
try:
    backupRoute = sys.argv[2]
except:
    print 'No back up route argument given.'
if backupRoute == 'network' or backupRoute == 'direct' or backupRoute == 'testNetwork' or backupRoute == 'testDirect':
    pass
else:
    msg.error('incorrect backup route supplied!', errMode, GFI(CF()).lineno, exitFlag)
    sys.exit()
# Source, destination and exclude dictionaries
v = {
    'SamMac network source' : '/Users/SamJones',
    'SamMac network destination' : '/Volumes/Seagate/Sam_macbook_backup/Backups',
    'SamMac direct source' : '/Users/SamJones',
    'SamMac direct destination' : '/Volumes/Seagate\ Backup\ Plus\ Drive/Sam_macbook_backup/Backups',
    'SamMac testNetwork source' : '/Users/SamJones/Documents/Arduino/arduino_sketches-master',
    'SamMac testNetwork destination' : '/Volumes/Seagate/Sam_macbook_backup/Arduino',
    'SamMac exclude' : ['.*', '.Trash', 'Library', 'Pictures'],
    'DebbieMac network source' : '/Users/DebbieJones',
    'DebbieMac network destination' : '/Volumes/Seagate/Debbie_macbook_backup/Backups',
    'DebbieMac direct source' : '/Users/DebbieJones',
    'DebbieMac direct destination' : '/Volumes/Seagate\ Backup\ Plus\ Drive/Debbie_macbook_backup/Backups',
    'DebbieMac testNetwork source': '/Users/DebbieJones/testFolder',
    'DebbieMac testNetwork destination' : '/Volumes/Seagate/Debbie_macbook_backup',
    'DebbieMac testDirect source' : '/Users/DebbieJones/testFolder',
    'DebbieMac testDirect destination' : '/Volumes/Seagate\ Backup\ Plus\ Drive/Debbie_macbook_backup',
    'DebbieMac exclude' : ['.*', '.Trash', 'Library', 'Pictures']
}
# Main threading code
class mainThreadClass(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        PIDMessage = 'Starting backup PID: %s' % os.getpid()
        msg.log(PIDMessage, logMode)
        mainThread()
        msg.log('Process completed successfully\n', logMode)
def mainThread():
    if platform == "win32":
        pass
    elif platform == "darwin":
        if 'network' in backupRoute:
            # Connect to SeagateBackup
            if os.path.ismount('/Volumes/Seagate') == False:
                msg.log('Mounting Seagate Backup Hub', logMode)
                commandM = 'mount volume'
                smbPoint = '"smb://username:password#mediacentre/Seagate"'
                childM = pexpect.spawn("%s '%s %s'" % ('osascript -e', commandM, smbPoint), timeout=None)
                childM.expect(pexpect.EOF)
            else:
                msg.log('Seagate already mounted', logMode)
        # Use rsync to backup files
        commandR = 'rsync -avb '
        for s in v['%s exclude' % sourceComputer]:
            commandR = commandR + "--exclude '%s' " % s
        commandR = commandR + '--delete --backup-dir="../PreviousBackups/%s" ' % time.strftime("%d-%m-%y %H%M")
        commandR = commandR + '%s %s' % (v['%s %s source' % (sourceComputer, backupRoute)], v['%s %s destination' % (sourceComputer, backupRoute)])
        msg.log(commandR, logMode)
        msg.log('Running rsync...rsync output below', logMode)
        child0 = pexpect.spawn(commandR, timeout=None)
        # Handling command output
        # If no '\r\n' in readline() output, then EOF reached
        output = child0.readline()
        while '\r\n' in output:
            msg.log(output.replace('\r\n', ''), logMode + 's')
            output = child0.readline()
    return
if __name__ == '__main__':
    # Create new threads
    threadMain = mainThreadClass()
    # Start new Threads
    threadMain.start()
logAndError.py
# to handle errors
import time
import sys
import threading
class logAndError:
    def __init__(self, errorAddress, logAddress):
        self.errorAddress = errorAddress
        self.logAddress = logAddress
        self.lock = threading.RLock()

    def error(self, message, errMode, lineNumber=None, exitFlag=[0]):
        message = '%s: %s' % (time.strftime("%d/%m/%y %H:%M"), message)
        # p = print to screen, f = print to file, e = end program
        if 'p' in errMode:
            print message
        if 'f' in errMode and 'e' not in errMode:
            errorFile = open(self.errorAddress, 'a')
            errorFile.write('%s\n' % message)
            errorFile.close()
        return

    def log(self, logmsg, logMode):
        with self.lock:
            logmsg2 = '%s: %s' % (time.strftime("%d/%m/%y %H:%M"), logmsg)
            if 'p' in logMode:
                # s = simple (no date stamp)
                if 's' in logMode:
                    print logmsg
                else:
                    print logmsg2
            if 'f' in logMode:
                if 's' in logMode:
                    logFile = open(self.logAddress, 'a')
                    logFile.write('%s\n' % logmsg)
                    logFile.close()
                else:
                    logFile = open(self.logAddress, 'a')
                    logFile.write('%s\n' % logmsg2)
                    logFile.close()
        return
When LLDB triggers breakpoint X, is there a command that will disable or remove X and then continue?
That's an interesting idea. There's no built in command to do this in lldb but it would be easy to implement as a user-defined command written in Python. SBThread::GetStopReason() will be eStopReasonBreakpoint if that thread stopped because of a breakpoint. SBThread::GetStopReasonDataCount() will return 2 -- indicating that the breakpoint id and location id are available. SBThread::GetStopReasonDataAtIndex(0) will give you the breakpoint ID, SBThread::GetStopReasonDataAtIndex(1) will give you the location ID. (a single user-specified breakpoint may resolve to multiple locations. e.g. an inlined function, or a function name that occurs in multiple libraries in a single program.)
Here's a quick & dirty example of a python command that does this. I put this in ~/lldb where I save my lldb user-defined commands and then in my ~/.lldbinit file I have a line like command script import ~/lldb/disthis.py.
In use, it looks like this:
% lldb a.out
(lldb) target create "a.out"
Current executable set to 'a.out' (x86_64).
(lldb) br s -n main
Breakpoint 1: where = a.out`main + 15 at a.c:4, address = 0x0000000100000f4f
(lldb) r
Process 67487 launched: '/private/tmp/a.out' (x86_64)
Process 67487 stopped
* thread #1: tid = 0x290c51, 0x0000000100000f4f a.out`main + 15 at a.c:4, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
#0: 0x0000000100000f4f a.out`main + 15 at a.c:4
1 #include <stdio.h>
2 int main()
3 {
-> 4 puts ("HI");
5 puts ("HI");
6 }
(lldb) com scr imp ~/lldb/disthis.py
(lldb) disthis
Breakpoint 1.1 disabled.
(lldb) br li
Current breakpoints:
1: name = 'main', locations = 1
1.1: where = a.out`main + 15 at a.c:4, address = 0x0000000100000f4f, unresolved, hit count = 1 Options: disabled
(lldb)
Pretty straightforward.
# import this into lldb with a command like
# command script import disthis.py
import lldb
def disthis(debugger, command, *args):
    """Usage: disthis
Disables the breakpoint the currently selected thread is stopped at."""

    target = None
    thread = None

    if len(args) == 2:
        # Old lldb invocation style
        result = args[0]
        if debugger and debugger.GetSelectedTarget() and debugger.GetSelectedTarget().GetProcess():
            target = debugger.GetSelectedTarget()
            process = target.GetProcess()
            thread = process.GetSelectedThread()
    elif len(args) == 3:
        # New (2015 & later) lldb invocation style where we're given the execution context
        exe_ctx = args[0]
        result = args[1]
        target = exe_ctx.GetTarget()
        thread = exe_ctx.GetThread()
    else:
        print "Unknown python function invocation from lldb."
        return

    if thread == None:
        print >>result, "error: process is not paused, or has not been started yet."
        result.SetStatus (lldb.eReturnStatusFailed)
        return
    if thread.GetStopReason() != lldb.eStopReasonBreakpoint:
        print >>result, "error: not stopped at a breakpoint."
        result.SetStatus (lldb.eReturnStatusFailed)
        return
    if thread.GetStopReasonDataCount() != 2:
        print >>result, "error: Unexpected number of StopReasonData returned, expected 2, got %d" % thread.GetStopReasonDataCount()
        result.SetStatus (lldb.eReturnStatusFailed)
        return

    break_num = thread.GetStopReasonDataAtIndex(0)
    location_num = thread.GetStopReasonDataAtIndex(1)
    if break_num == 0 or location_num == 0:
        print >>result, "error: Got invalid breakpoint number or location number"
        result.SetStatus (lldb.eReturnStatusFailed)
        return

    bkpt = target.FindBreakpointByID (break_num)
    if location_num > bkpt.GetNumLocations():
        print >>result, "error: Invalid location number"
        result.SetStatus (lldb.eReturnStatusFailed)
        return

    bkpt_loc = bkpt.GetLocationAtIndex(location_num - 1)
    if bkpt_loc.IsValid() != True:
        print >>result, "error: Got invalid BreakpointLocation"
        result.SetStatus (lldb.eReturnStatusFailed)
        return

    bkpt_loc.SetEnabled(False)
    print >>result, "Breakpoint %d.%d disabled." % (break_num, location_num)
    return

def __lldb_init_module (debugger, dict):
    debugger.HandleCommand('command script add -f %s.disthis disthis' % __name__)
I'd like to better understand the execution duration of statements within an R script when run in batch mode. Is there a good way to do this?
I had one thought on how I'd love to see this done. When executing in batch mode, the source is echoed to the specified log file. Is there a way for it to echo a timestamp next to the source code in this log file?
> R CMD BATCH script.R script.Rout
Here is the output that I see today.
> tail -f script.Rout
...
> # features related to the date
> trandateN <- as.integer(trandate)
> dayOfWeek <- as.integer(wday(trandate))
> holiday <- mapply(isHoliday, trandate)
I'd like to see something like...
> tail -f script.Rout
...
2013-06-27 11:18:01 > # features related to the date
2013-06-27 11:18:01 > trandateN <- as.integer(trandate)
2013-06-27 11:18:05 > dayOfWeek <- as.integer(wday(trandate))
2013-06-27 11:19:02 > holiday <- mapply(isHoliday, trandate)
You can use addTaskCallback as follows to create a log of each top level execution.
.log <- data.frame(time=character(0), expr=character(0))

.logger <- function(expr, value, ok, visible) {  # formals described in ?addTaskCallback
  time <- as.character(Sys.time())
  expr <- deparse(expr)
  .log <<- rbind(.log, data.frame(time, expr))
  return(TRUE)  # required of task callback functions
}

.save.log <- function() {
  if (exists('.logger')) write.csv(.log, 'log.csv')
}
addTaskCallback(.logger)
x <- 1:10
y <- mean(x)
.save.log()
.log
# time expr
# 1 2013-06-27 12:01:45.837 addTaskCallback(.logger)
# 2 2013-06-27 12:01:45.866 x <- 1:10
# 3 2013-06-27 12:01:45.876 y <- mean(x)
# 4 2013-06-27 12:01:45.900 .save.log()
Of course instead of committing the cardinal sin of growing a data.frame row-wise, as I have here, you could just leave a connection open and write directly to file, closing the connection with on.exit.
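For example, a minimal sketch of that file-based variant could look like the following (the connection and callback names are just illustrative, and the cleanup here is done explicitly at the end of the script rather than with on.exit):
.log_con <- file('log.csv', open = 'a')
.file_logger <- function(expr, value, ok, visible) {
  # one row per top-level expression: timestamp, deparsed call
  line <- paste(as.character(Sys.time()),
                paste(deparse(expr), collapse = ' '), sep = ',')
  writeLines(line, .log_con)
  flush(.log_con)
  TRUE  # keep the callback registered
}
.cb_id <- addTaskCallback(.file_logger)

x <- 1:10
y <- mean(x)

removeTaskCallback(.cb_id)
close(.log_con)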
And if you want to be tidy about it, you can pack the logging setup into a function pretty nicely.
.log <- function() {
  .logger <<- local({
    log <- data.frame(time=character(0), expr=character(0))
    function(expr, value, ok, visible) {
      time <- as.character(Sys.time())
      expr <- deparse(expr)
      log <<- rbind(log, data.frame(time, expr))
      return(TRUE)
    }
  })
  invisible(addTaskCallback(.logger))
}

.save.log <- function() {
  if (exists('.logger'))
    write.csv(environment(.logger)$log, 'log.csv')
}
.log()
x <- 1:10
y <- mean(x)
.save.log()
See ?Sys.time. It returns a POSIXct datetime, which you'll need to format when outputting to a log file.
cat(format(Sys.time()), " is the current time\n")
I have an R script where I have inserted the following code:
options(Debug=TRUE)
#SOME MORE CODE
browser(expr = isTRUE(getOption("Debug")))
#SOME MORE CODE
After the debugger starts, I would like it to proceed to the next line so I type n. However, this does not proceed to the next line but rather seems to continue.
How do I step through the remainder of my code after a browser() statement?
Thanks
To set a point within a function at which to begin debugging, you'll likely want to use trace().
Let's say you have a function myFun and want to begin debugging it right before its call to plot():
myFun <- function() {
  x <-
    8:1
  y <-
    1:8
  plot(y~x)
  lines(y~x)
  text(x, y, letters[1:8], pos=3)
}
To construct the call to trace, you will need to know at which step in myFun the call to plot() occurs. To determine that, use the construct as.list(body(myFun)):
as.list(body(myFun))
# [[1]]
# `{`
#
# [[2]]
# x <- 8:1
#
# [[3]]
# y <- 1:8
#
# [[4]]
# plot(y ~ x)
#
# ... More ...
After noting that the call to plot occurs in step 4, you can use trace() to tell R that you'd like to enter a browser right before step 4 every time myFun is called:
trace(myFun, browser, 4)
# TRY IT OUT
# (Once in the browser, type "n" and press Enter to step through the code.)
myFun()
Finally, when you're done debugging the function, turn the trace off with a call to untrace(myFun).
EDIT: The strategy for setting breakpoints in sourced-in scripts is similar. Again, you don't actually insert code into the script. Instead, use findLineNum() and setBreakpoint().
Let's say that the function myFun() described above is defined in the text file "myScript.R", which has five blank lines before the function definition. To insert the breakpoint right before the call to plot:
source("myScript.R") # Must source() once before using findLineNum
# or setBreakPoint
findLineNum("myScript.R#10") # I see that I missed the step by one line
setBreakpoint("myScript.R#11") # Insert the breakpoint at the line that calls
# plot()
myFun() # Test that breakpoint was properly inserted
# (Again, use "n" and Enter to step through code)
browser() is generally meant for interactive use and for placement inside a function: if you put it inline in a script and source the whole thing in, the remaining lines are simply executed one by one against the browser prompt once it is called.
E.g. assuming the script:
options(Debug=TRUE)
browser(expr = isTRUE(getOption("Debug")))
b <- 1
b <- 2
b <- 3
It would execute like this:
R> options(Debug=TRUE)
R> browser(expr = isTRUE(getOption("Debug")))
Called from: top level
Browse[1]> b <- 1
Browse[1]> b <- 2
Browse[1]> b <- 3
If you were to run the script step by step and then call a function, as below, its use makes more sense:
R> options(Debug=TRUE)
R> a <- function() {
browser(expr = isTRUE(getOption("Debug")))
b <- 1
b <- 2
b <- 3
return(b)
}
R> e <- a()
Called from: a()
Browse[1]> n
debug at #5: b <- 1
Browse[2]> # ENTER
debug at #6: b <- 2
Browse[2]> b
[1] 1
Browse[2]> # ENTER
debug at #7: b <- 3
Browse[2]> b
[1] 2
Browse[2]> # ENTER
debug at #8: return(b)
Browse[2]> b
[1] 3
Browse[2]> # ENTER
[1] 3
R>