I'm dealing with some GB-sized numpy arrays in IPython. When I delete them, I definitely want them gone, in order to recover the memory. IPython's output cache is quite annoying there, as it keeps the objects alive even after deleting the last actively intended reference to them. I already set
c.TerminalInteractiveShell.cache_size = 0
in the IPython configuration, but this only disables caching of entries to _oh; the other variables like _, __ and so on are still created. I'm also aware of %xdel, but I'd prefer to disable the caching completely, since I rarely use the output history anyway, so that a plain del works again right away.
Looking at IPython/core/displayhook.py, lines 209-214, I would say that it is not configurable. You could try making a PR to add an option to disable it completely.
Enter
echo "__builtin__._ = True" > ~/.config/ipython/profile_default/startup/00-disable-history.py
and your history should be gone.
Edit:
Seems like the path to the config directory is sometimes a bit different, either ~/.config/ipython or just ~/.ipython/. So just check which one you have and adjust the path accordingly. The solution still works with jupyter console.
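If you're on Python 3, note that the __builtin__ module was renamed to builtins; a sketch of the same startup file under that assumption (the IPython-side effect should be unchanged):
# ~/.ipython/profile_default/startup/00-disable-history.py
# Python 3 rename: `__builtin__` became `builtins`.
import builtins
builtins._ = True  # IPython sees `_` as user-defined and stops rebinding it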
It seems that we can now suppress the output cache by putting a ";" at the end of the line.
See http://ipython.org/ipython-doc/stable/interactive/tips.html#suppress-output
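For example, in a session (a sketch; with the trailing semicolon the result should be neither displayed nor stored in the cache):
In [1]: import numpy as np
In [2]: a = np.ones(5)   # plain assignments never hit the cache anyway
In [3]: a.sum();         # trailing semicolon: nothing displayed, nothing cached
In [4]: len(Out)
Out[4]: 0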
Create an IPython profile:
!ipython profile create
The output might be (for ipython v4.0):
[ProfileCreate] Generating default config file: '/root/.ipython/profile_default/ipython_config.py'
[ProfileCreate] Generating default config file: '/root/.ipython/profile_default/ipython_kernel_config.py'
Then add the line 'c.InteractiveShell.cache_size = 0' to the ipython_kernel_config.py file by running:
!echo 'c.InteractiveShell.cache_size = 0' >> /root/.ipython/profile_default/ipython_kernel_config.py
Load another IPython kernel and check that this works:
In [1]: 123
Out[1]: 123
In [2]: _1
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-51-21553803e553> in <module>()
----> 1 _1
NameError: name '_1' is not defined
In [3]: len(Out)
Out[3]: 0
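For single huge objects, the %xdel magic mentioned in the question is also still available; a sketch with a hypothetical array a:
In [1]: import numpy as np
In [2]: a = np.ones((1024, 1024, 128))   # ~1 GB of float64
In [3]: %xdel a                          # deletes a and purges IPython's internal references to it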
Related
I am trying to conditionally load some files from a directory. I would like to have a progress bar from tqdm on the process. I am currently running this:
import os
from tqdm import tqdm

loaddir = r'D:\Folder'

# loop over the files in the directory
print('Data load initiated')
for subdir, dirs, files in os.walk(loaddir):
    for name in tqdm(files):
        if name.startswith('Test'):
            pass  # do things
which gives
Data load initiated
0%| | 0/6723 [00:00<?, ?it/s]
0%| | 26/6723 [00:00<00:28, 238.51it/s]
1%| | 47/6723 [00:00<00:31, 213.62it/s]
1%| | 72/6723 [00:00<00:30, 220.84it/s]
1%|▏ | 91/6723 [00:00<00:31, 213.59it/s]
2%|▏ | 115/6723 [00:00<00:30, 213.73it/s]
This has two problems:
When progress is updated, a new line appears in my IPython console in Spyder.
I am actually timing the loop over all the files, not just the files that start with 'Test', so the progress and remaining time are not accurate.
However, if I try this:
loaddir = r'D:\Folder'
# loop the files in the directory
print('Data load initiated')
for subdir, dirs, files in os.walk(loaddir_res):
for name in files:
if tqdm(name.startswith('Test')):
#do things
I get the following error.
Traceback (most recent call last):
File "<ipython-input-80-b801165d4cdb>", line 21, in <module>
if tqdm(name.startswith('Probe')):
TypeError: 'NoneType' object cannot be interpreted as an integer
I would like to have a progress bar on only one line that updates whenever the startswith condition is met.
----UPDATE----
I also found out here that it can also be used like this:
files = [f for f in tqdm(files) if f.startswith('Test')]
This allows tracking progress in a list comprehension by wrapping the iterable with tqdm. However, in Spyder this results in a separate line for each progress update.
----UPDATE2----
It actually works fine in Spyder. Sometimes, if the loop fails, it might go back to printing one line per progress update, but I haven't seen this very often after the latest updates.
Firstly, the answer:

import os
from tqdm import tqdm

loaddir = r'D:\surfdrive\COMSOL files\Batch folder\Current batch simulation files'

# loop over the files in the directory
print('Data load initiated')
for subdir, dirs, files in os.walk(loaddir):
    files = [f for f in files if f.startswith('Test')]
    for name in tqdm(files):
        pass  # do things
This will work in any decent environment (including a bare terminal). The solution is to not give tqdm the unused filenames. You may find https://github.com/tqdm/tqdm/wiki/How-to-make-a-great-Progress-Bar insightful.
Secondly, the issue with multiple lines of output is well known and is due to some environments being broken (https://github.com/tqdm/tqdm#faq-and-known-issues): they do not support carriage return (\r).
The correct links for this problem in Spyder are https://github.com/tqdm/tqdm/issues/512 and https://github.com/spyder-ide/spyder/issues/6172
(Spyder maintainer here) This is a known limitation of tqdm progress bars in Spyder. I'd recommend you open an issue about it in its GitHub repository.
Specify position=0 and leave=True like this:
for i in tqdm(range(10), position=0, leave=True):
    pass  # some code
Or in a list comprehension:
nums = [i for i in tqdm(range(10), position=0, leave=True)]
It's worth mentioning that you can make `position=0` and `leave=True` the default settings, so you won't need to specify them each time, like this:
from tqdm import tqdm
from functools import partial

tqdm = partial(tqdm, position=0, leave=True)  # this line does the magic

# for loop
for i in tqdm(range(10)):
    pass  # some code

# list comprehension
nums = [i for i in tqdm(range(10))]
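On a related note, if you need to print from inside the loop without breaking the single-line bar, tqdm ships a tqdm.write() helper; a minimal sketch:

from tqdm import tqdm

for i in tqdm(range(10), position=0, leave=True):
    if i % 5 == 0:
        tqdm.write("processing item %d" % i)  # printed above the bar, which stays on one line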
This may be an old bug; I found this report. I'm using Sublime Text 3, but I think this code also works on 2.
When I call self.view.run_command('save') within a plugin, the save does happen -- I can type the file in a console window and see the results. The dirty flag seems to get cleared. But the tab for the file contains a dot rather than an x, indicating the file hasn't been saved. And sure enough, if you try to close it, it asks if you want to save the file.
Is there any way to refresh the file window so it recognizes that the file has been saved?
Here's my plugin code: (This is my first plugin so please excuse obvious style issues)
# Sublime Text plugin to insert output in the OUTPUT_SHOULD_BE comment
# Bind to key with:
# { "keys": ["f12"], "command": "insert_output" },
import sublime, sublime_plugin, pprint, os, re

class InsertOutputCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        outfile = self.view.file_name().rsplit('.')[0] + ".out"
        if not os.path.exists(outfile):
            sublime.error_message("Not Found: " + outfile)
            return
        out_data = open(outfile).read().strip()
        region = self.view.find(r"/\* OUTPUT_SHOULD_BE\n", 0)
        if region:
            self.view.insert(edit, region.end(), out_data)
            self.view.run_command('save')
            self.view.window().focus_view(self.view)
        else:
            sublime.error_message("Not Found: OUTPUT_SHOULD_BE")
I'm sure this is probably a terrible hack, but it works:
self.view.run_command("save")
# Refresh the buffer and clear the dirty flag:
sublime.set_timeout(lambda: self.view.run_command("revert"), 10)
The revert command, which must be delayed in order to work, simply brings back whatever is stored in the file. Since the file was successfully saved on disk, this is just the same file that we already see on the screen. In the process, the dirty flag is cleared and the dot on the file tab becomes an x.
Feels very hacky to me and I'd love a more proper solution. But at least it works, ugly or not.
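For reference, the whole workaround can be packaged as its own command; a minimal sketch (the command name save_and_refresh is hypothetical):

import sublime
import sublime_plugin

class SaveAndRefreshCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.run_command('save')
        # The revert must be deferred: running it synchronously right after
        # the save does not clear the dirty flag.
        sublime.set_timeout(lambda: self.view.run_command('revert'), 10)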
This example routine generates two Throw::nocatch warning messages in the kernel window. Can they be handled somehow?
The example consists of this code in a file "test.m" created in C:\Temp:
Needs["JLink`"];
$FrontEndLaunchCommand = "Mathematica.exe";
UseFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
Then these commands are pasted and run at the Windows Command Prompt:
PATH = C:\Program Files\Wolfram Research\Mathematica\8.0\;%PATH%
start MathKernel -noprompt -initfile "C:\Temp\test.m"
Addendum
The reason for using UseFrontEnd as opposed to UsingFrontEnd is that an interactive front end may be required to preserve output and messages from notebooks that are usually run interactively. For example, with C:\Temp\test.m modified like so:
Needs["JLink`"];
$FrontEndLaunchCommand="Mathematica.exe";
UseFrontEnd[
nb = NotebookOpen["C:\\Temp\\run.nb"];
SelectionMove[nb, Next, Cell];
SelectionEvaluate[nb];
];
Pause[10];
CloseFrontEnd[];
and a notebook C:\Temp\run.nb created with a single cell containing:
x1 = 0; While[x1 < 1000000,
If[Mod[x1, 100000] == 0,
Print["x1=" <> ToString[x1]]]; x1++];
NotebookSave[EvaluationNotebook[]];
NotebookClose[EvaluationNotebook[]];
this code, launched from a Windows Command Prompt, will run interactively and save its output. This is not possible to achieve using UsingFrontEnd or MathKernel -script "C:\Temp\test.m".
During the initialization, the kernel code is in a mode which prevents aborts.
Throw/Catch are implemented with Abort, therefore they do not work during initialization.
A simple example that shows the problem is to put this in your test.m file:
Catch[Throw[test]];
Similarly, functions like TimeConstrained, MemoryConstrained, Break, the Trace family, Abort and those that depend upon it (like certain data paclets) will have problems like this during initialization.
A possible solution to your problem might be to consider the -script option:
math.exe -script test.m
Also, note that in version 8 there is a documented function called UsingFrontEnd, which does what UseFrontEnd did, but is auto-configured, so this:
Needs["JLink`"];
UsingFrontEnd[NotebookWrite[CreateDocument[], "Testing"]];
should be all you need in your test.m file.
See also: Mathematica Scripts
Addendum
One possible solution for using -script with UsingFrontEnd is the 'run.m' script
included below. This does require setting up a 'Test' kernel in the kernel configuration options (basically a clone of the 'Local' kernel settings).
The script includes two utility functions, NotebookEvaluatingQ and NotebookPauseForEvaluation, which help the script to wait for the client notebook to finish evaluating before saving it. The upside of this approach is that all the evaluation control code is in the 'run.m' script, so the client notebook does not need to have a NotebookSave[EvaluationNotebook[]] statement at the end.
NotebookPauseForEvaluation[nb_] := Module[{}, While[NotebookEvaluatingQ[nb], Pause[.25]]]

NotebookEvaluatingQ[nb_] := Module[{},
  SelectionMove[nb, All, Notebook];
  Or @@ Map["Evaluating" /. # &, Developer`CellInformation[nb]]
]

UsingFrontEnd[
  nb = NotebookOpen["c:\\users\\arnoudb\\run.nb"];
  SetOptions[nb, Evaluator -> "Test"];
  SelectionMove[nb, All, Notebook];
  SelectionEvaluate[nb];
  NotebookPauseForEvaluation[nb];
  NotebookSave[nb];
]
I hope this is useful in some way to you. It could use a few more improvements, like resetting the notebook's kernel to its original setting and closing the notebook after saving it, but this code should work for this particular purpose.
On a side note, I tried one other approach, using this:
UsingFrontEnd[ NotebookEvaluate[ "c:\\users\\arnoudb\\run.nb", InsertResults->True ] ]
But this kicks the kernel terminal session into a dialog mode, which seems like a bug to me (I'll check into this and report it if it is a valid issue).
I am trying to debug a problem where users occasionally have locked files which they try to open. The code appears to have correct exception handling but users are still reporting seeing error messages. How can I simulate a locked file so that I can debug this myself?
EDIT: For Windows.
try this:
( >&2 pause ) >> yourfile.txt
>> opens yourfile.txt in append mode
see this for a reference
It depends, but as an example, MS Word locks the files it has open.
If you are wondering whether your application locks files and does not release the locks:
just modify your application a bit (to create a test app) so that it never closes the file, and keep it running.
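A minimal sketch of that approach in Python on Windows, assuming the standard-library msvcrt module and a placeholder file name; the lock is held for as long as the process runs:

import msvcrt
import time

f = open("yourfile.txt", "r+")  # the file must already exist; keep the handle open
f.seek(0)
msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 1)  # non-blocking exclusive lock on the first byte
time.sleep(3600)  # keep the process, and therefore the lock, alive while you test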
I used the LockFileEx function from the Windows API to write a unit test in Python. This worked well for me (shutil.copy() with a locked target fails).
import win32con
import win32file
import pywintypes

p = "yourfile.txt"
f = open(p, "w")  # keep this handle open for as long as the lock should be held
hfile = win32file._get_osfhandle(f.fileno())
flags = win32con.LOCKFILE_EXCLUSIVE_LOCK | win32con.LOCKFILE_FAIL_IMMEDIATELY
win32file.LockFileEx(hfile, flags, 0, 0xffff0000, pywintypes.OVERLAPPED())
See: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365203%28v=vs.85%29.aspx
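A usage sketch for the snippet above: while the lock is held, overwriting the locked target should fail immediately (this is what the unit test checked via shutil.copy()):

import shutil

try:
    shutil.copy("some_source.txt", p)  # p is still locked by LockFileEx above
except OSError as err:
    print("locked as expected:", err)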
I am using this at the command prompt:
type yourfile.txt | more
to insert a very long text of a few pages inside yourfile.txt, and I want to pipe via | more to batch the insertion.
However, the file seems to lock; any reason why?
If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls; see traceback() for interactive and non-interactive R sessions.
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
Doing options(error=traceback) provides a little more information about the content of the lines leading up to the error: it causes a traceback to appear if there is an error, and for some errors it has the line number, prefixed by #. But it's hit or miss; many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10, 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # <text> is a dummy filename used by
parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R? It should be, as it is in every other language.
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow, and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
  # Set the working directory where you want the dump to go, since
  # dump.frames() doesn't seem to accept absolute file paths.
  setwd('~/myUsername/directoryForDump')
  # First dump to file; this dump is not accessible by the R session.
  dump.frames("errorDump", to.file = TRUE, include.GlobalEnv = TRUE)
  # Specify sink file to redirect all output.
  sink(file = "error.log")
  # Dump again to be able to retrieve the error message and write it to the
  # error log; this dump is accessible by the R session since it is not
  # dumped to file.
  dump.frames()
  # Print the error message to the file, along with a simplified stack trace.
  cat(attr(last.dump, "error.message"))
  cat('\nTraceback:')
  cat('\n')
  # Print a full traceback of function calls with all parameters. The 2
  # passed to traceback() omits the outermost two function calls.
  traceback(2)
  sink()
  q()
}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where you have non-interactive R sessions being initiated at the command line and you want information retained about unexpected errors. The ability to dump memory to a file you can use to inspect working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitate speedy post-mortem debugging of what caused the error.
First, set options(show.error.locations = TRUE), and then call traceback(). The error line number will be displayed after the # character.