How can I store "image lookup -v address" result inside a variable?

I am able to symbolicate a symbol address with the following lldb command:
image lookup --address $SYMBOL_ADDRESS
But while writing a shell script to parse the output, I am not able to find a way to store the output of the above command in a variable or file.

First off, if your script's job is mostly driving lldb and you know some Python, you will be much happier using lldb's Python module, where you can drive the debugger directly, than making lldb produce text output that you then parse in a shell script.
The lldb Python module provides APIs like SBTarget.ResolveSymbolContextForAddress, which runs the same lookup as image lookup --address but returns the result as a Python lldb.SBSymbolContext object that you can query for module/file/line etc. using the APIs on the object. Getting bits of info out of the result is much easier with the lldb APIs.
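For instance, here is a minimal sketch of that API (an illustration, not a complete script: it assumes target is an lldb.SBTarget you have already created and load_addr holds the address you would otherwise pass to image lookup --address):
import lldb

# Resolve the address through the API instead of parsing text output.
# Assumes `target` (lldb.SBTarget) and `load_addr` already exist.
addr = lldb.SBAddress(load_addr, target)
ctx = target.ResolveSymbolContextForAddress(addr, lldb.eSymbolContextEverything)

line_entry = ctx.GetLineEntry()
print("%s`%s at %s:%d" % (
    ctx.GetModule().GetFileSpec().GetFilename(),
    ctx.GetFunction().GetName(),
    line_entry.GetFileSpec().GetFilename(),
    line_entry.GetLine(),
))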
But if you have to use a shell script, then the easiest thing is probably to write the command output to a file and read that back into the shell script. lldb doesn't yet have generic support for tee-ing command output into a log file, but the lldb Python module lets you run command-line commands and capture their output programmatically.
So you can do it easily from lldb's Python script interpreter:
(lldb) script
Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D.
>>> result = lldb.SBCommandReturnObject()
>>> lldb.debugger.GetCommandInterpreter().HandleCommand("image lookup -va $pc", result)
2
>>> fh = open("/tmp/out.txt", "w")
>>> fh.write(result.GetOutput())
>>> fh.close()
>>> quit
(lldb) plat shell cat /tmp/out.txt
Address: foo[0x0000000100003f6f] (foo.__TEXT.__text + 15)
Summary: foo`main + 15 at foo.c:6:3
Module: file = "/tmp/foo", arch = "x86_64"
CompileUnit: id = {0x00000000}, file = "/tmp/foo.c", language = "c99"
Function: id = {0x7fffffff00000032}, name = "main", range = [0x0000000100003f60-0x0000000100003f8a)
FuncType: id = {0x7fffffff00000032}, byte-size = 0, decl = foo.c:4, compiler_type = "int (void)"
Blocks: id = {0x7fffffff00000032}, range = [0x100003f60-0x100003f8a)
LineEntry: [0x0000000100003f6f-0x0000000100003f82): /tmp/foo.c:6:3
Symbol: id = {0x00000005}, range = [0x0000000100003f60-0x0000000100003f8a), name="main"
You can also write a lldb command in Python that wraps this bit of business, which would make it easier to use. Details on that are here:
https://lldb.llvm.org/use/python-reference.html#create-a-new-lldb-command-using-a-python-function
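As a rough sketch of that approach (the command name lookup_to_file and its argument handling are made up for illustration, following the pattern in that document):
import lldb

def lookup_to_file(debugger, command, result, internal_dict):
    """Usage: lookup_to_file <address> <output-file>"""
    args = command.split()
    if len(args) != 2:
        result.SetError("usage: lookup_to_file <address> <output-file>")
        return
    address, path = args
    # Run the lookup and capture its output instead of printing it.
    ret = lldb.SBCommandReturnObject()
    debugger.GetCommandInterpreter().HandleCommand(
        "image lookup -va %s" % address, ret)
    with open(path, "w") as fh:
        fh.write(ret.GetOutput())

def __lldb_init_module(debugger, internal_dict):
    # Registers the command when this file is loaded with:
    #   (lldb) command script import /path/to/lookup_to_file.py
    debugger.HandleCommand(
        "command script add -f lookup_to_file.lookup_to_file lookup_to_file")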
You could even do a hybrid approach and make all the lldb work you want to do into a custom Python command. That would let you use the lldb Python APIs to get whatever info you need and write it out in whatever format is convenient for you, which would simplify both the lldb invocation in your shell script and the recovery of the information lldb provided.

Related

Adobe Illustrator: Run python file from Extendscript in Windows

I am facing an issue while running this script (time.jsx):
var timeStr = system.callSystem("cmd.exe /c \"time /t\"");
alert("Current time is " + timeStr); Documentation of AE
it works in Adobe After Effects but I want to use it specifically in illustrator. Basically, i want to run my Python script from Extendscript(.jsx). But I couldn't find any solution to do so yet.
Your help is appreciated.
Thanks in Advance.
I have found a way to execute Python and other scripts from ExtendScript (*.jsx). The documentation describes a File object that has an execute() method, which launches the file with its associated application. For example, if you want to run a Python hello-world through a .jsx file, create a .py file containing print("hello world"), then add these lines to script.jsx:
var pyHello = new File("<path of py file>");
var bool = pyHello.execute();
alert(bool);
If the script was executed, execute() returns true; otherwise, false.

Use Bash's select from within Python

The idea of the following was to use Bash's select from Python: use select to get the input from the user, communicate with the Bash script to get the user's selection, and use it afterwards in the Python code. Please tell me if this is at least possible.
Have the following simple Bash script:
#!/bin/bash -x
function select_target {
    target_list=("Target1" "Target2" "Target3")
    PS3="Select Target: "
    select target in "${target_list[@]}"; do
        break
    done
    echo $target
}
select_target
it works standalone
Now I tried to call it from Python like this:
import tempfile
import subprocess

select_target_sh_func = """
#!/bin/bash
function select_target {
    target_list=(%s)
    PS3="Select Target: "
    select target in "${target_list[@]}"; do
        break
    done
    echo $target
}
select_target
"""

target_list = ["Target1", "Target2", "Target3"]

with tempfile.NamedTemporaryFile() as temp:
    temp.write(select_target_sh_func % ' '.join(map(lambda s: '\"%s\"' % str(s), target_list)))
    subprocess.call(['chmod', '0777', temp.name])
    sh_proc = subprocess.Popen(["bash", temp.name], stdout=subprocess.PIPE)
    (output, err) = sh_proc.communicate()
    exit_code = sh_proc.wait()
    print output
It does nothing. No output, no selection.
I'm using macOS High Sierra, PyCharm, and Python 2.7.
PS
After some reading and experimenting ended up with the following:
with tempfile.NamedTemporaryFile() as temp:
    temp.write(select_target_sh_func % ' '.join(map(lambda s: '\"%s\"' % str(s), target_list)))
    temp.flush()
    # bash: /var/folders/jm/4j4mq_w52bx2l5qwg4gt44580000gn/T/tmp00laDV: Permission denied
    subprocess.call(['chmod', '0500', temp.name])
    sh_proc = subprocess.Popen(["bash", "-c", temp.name], stdout=subprocess.PIPE)
    (output, err) = sh_proc.communicate()
    exit_code = sh_proc.wait()
    print output
It behaves as I expected it would: the user is able to select the target by just typing its number. My mistake was that I forgot to flush.
PPS
The solution works on macOS High Sierra; sadly, it does not on Debian Jessie, which complains:
bash: /tmp/tmpdTv4hp: Text file busy
I believe it is because `with tempfile.NamedTemporaryFile()` keeps the temp file open, and this somehow prevents Bash from working with it. This renders the whole idea useless.
Python is sitting between your terminal or console and the (noninteractive!) Bash process you are starting. Furthermore, you are not redirecting the standard error pipe anywhere, so subprocess.communicate() cannot capture it (and if it could, you would not be able to see the script's menu, which select prints on standard error).
Running an interactive process programmatically is a nontrivial scenario; you'll want to look at pexpect, or just implement your own select command in Python. I suspect the latter will turn out to be the easiest solution (trivially so if you can find an existing library).
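If you go the pure-Python route, a select-style prompt is only a few lines; here is a minimal sketch (a hypothetical helper, written for the Python 2 the question uses):
def select_prompt(options, prompt="Select Target: "):
    # Print a numbered menu, then loop until a valid number is entered.
    for i, opt in enumerate(options, 1):
        print("%d) %s" % (i, opt))
    while True:
        choice = raw_input(prompt)  # use input() on Python 3
        if choice.isdigit() and 1 <= int(choice) <= len(options):
            return options[int(choice) - 1]

target = select_prompt(["Target1", "Target2", "Target3"])
print(target)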

Determine compiler name/version from gdb

I share my .gdbinit script (via NFS) across machines running different versions of gcc. I would like some gdb commands to be executed if the code I am debugging has been compiled with a specific compiler version. Can gdb do that?
I came up with this:
define hook-run
python
from subprocess import Popen, PIPE
from re import search

# grab the executable filename from gdb
# this is probably not general enough --
# there might be several objfiles around
objfilename = gdb.objfiles()[0].filename

# run readelf on the .comment section, which records the compiler
process = Popen(['readelf', '-p', '.comment', objfilename], stdout=PIPE)
output = process.communicate()[0].decode('utf-8', 'replace')  # bytes under Python 3

# match the version number with a regex
regex = r'GCC: \(GNU\) ([\d.]+)'
match = search(regex, output)
if match:
    compiler_version = match.group(1)
    gdb.execute('set $compiler_version="' + str(compiler_version) + '"')
gdb.execute('init-if-undefined $compiler_version="None"')

# do what you want with the python compiler_version variable and/or
# with the $compiler_version convenience variable
# I use it to load version-specific pretty-printers
end
end
It is good enough for my purpose, although it is probably not general enough.
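For instance, the captured version can drive which pretty-printers get loaded at the end of the python block above; a hypothetical continuation (the printer script path is made up for illustration):
# Load version-specific pretty-printers for the detected compiler
# (hypothetical path, for illustration only).
if match and compiler_version.startswith("4."):
    gdb.execute('source /path/to/gcc4-printers.py')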

ipython notebook : how to parallelize external script

I'm trying to use parallel computing from the IPython parallel library. But I have little knowledge about it, and I find the doc difficult to read for someone who knows nothing about parallel computing.
Funnily, all the tutorials I found just reuse the example in the doc, with the same explanation, which from my point of view is useless.
Basically, what I'd like to do is run a few scripts in the background so they execute at the same time. In bash it would be something like:
for my_file in $(cat list_file); do
    python pgm.py $my_file &
done
But the bash interpreter of the IPython notebook doesn't handle background mode.
It seems the solution is to use IPython's parallel library.
I tried :
from IPython.parallel import Client
rc = Client()
rc.block = True
dview = rc[:2] # I take only 2 engines
But then I'm stuck. I don't know how to run the same script or program twice (or more) at the same time.
Thanks.
One year later, I eventually managed to get what I wanted.
1) Create a function that does what you want to run on the different CPUs. Here it just calls a script from bash with the ! IPython magic command. I guess it would also work with the subprocess call() function.
def my_func(my_file):
    !python pgm.py {my_file}
Don't forget the {} when using !
Note also that the path to my_file should be absolute, since the cluster engines run where you started the notebook (with jupyter notebook or ipython notebook), which is not necessarily where your files are.
2) Start your IPython notebook cluster with the number of CPUs you want.
Wait a couple of seconds, then execute the following cell:
from IPython import parallel
rc = parallel.Client()
view = rc.load_balanced_view()
3) Get a list of the files you want to process:
files = list_of_files
4) Asynchronously map your function over all your files using the view of the engines you just created (not sure of the wording):
r = view.map_async(my_func, files)
While it's running, you can do something else in the notebook (it runs in the "background"!). You can also call r.wait_interactive(), which interactively reports the number of files processed, the time spent so far, and the number of files left. This will prevent you from running other cells (but you can interrupt it).
And if you have more files than engines, no worries: they will be processed as soon as an engine finishes with one file.
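For completeness, a minimal sketch of dispatching the map and collecting the results afterwards, using the r and view defined above:
r = view.map_async(my_func, files)
r.wait_interactive()  # live progress: files done, time elapsed, files left
outputs = r.get()     # one return value per input file, once all are done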
Hope this will help others !
This tutorial might be of some help:
http://nbviewer.ipython.org/github/minrk/IPython-parallel-tutorial/blob/master/Index.ipynb
Note also that I still have IPython 2.3.1, I don't know if it changed since Jupyter.
Edit: Still works with Jupyter; see here for differences and potential issues you may encounter.
Note that if you use external libraries in your function, you need to import them on the different engines with:
%px import numpy as np
or
%%px
import numpy as np
import pandas as pd
The same goes for variables and other functions: you need to push them to the engines' namespace:
rc[:].push(dict(
    foo=foo,
    bar=bar))
If you're trying to execute some external scripts in parallel, you don't need to use IPython's parallel functionality. Replicating bash's parallel execution can be achieved with the subprocess module as follows:
import subprocess

procs = []
for i in range(10):
    procs.append(subprocess.Popen(['ls', '/Users/shad/tmp/'], stdout=subprocess.PIPE))

results = []
for proc in procs:
    stdout, _ = proc.communicate()
    results.append(stdout)
Be wary that if a subprocess generates a lot of output, it will block once the OS pipe buffer fills up, until its output is read. If you print the output (results) you get:
print results
['file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n']
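Applied to the original use case (one pgm.py invocation per file from list_file, both names taken from the question), the same pattern would look roughly like this:
import subprocess

# Read the list of files, then launch one pgm.py process per file.
with open('list_file') as fh:
    files = fh.read().split()

procs = [subprocess.Popen(['python', 'pgm.py', f], stdout=subprocess.PIPE)
         for f in files]

# Collect the output of each process as it finishes.
results = [p.communicate()[0] for p in procs]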

How to get R script line numbers at error?

If I am running a long R script from the command line (R --slave script.R), then how can I get it to give line numbers at errors?
I don't want to add debug commands to the script if at all possible; I just want R to behave like most other scripting languages.
This won't give you the line number, but it will tell you where the failure happens in the call stack which is very helpful:
traceback()
[Edit:] When running a script from the command line you will have to skip one or two calls; see traceback() for interactive and non-interactive R sessions.
I'm not aware of another way to do this without the usual debugging suspects:
debug()
browser()
options(error=recover) [followed by options(error = NULL) to revert it]
You might want to look at this related post.
[Edit:] Sorry...just saw that you're running this from the command line. In that case I would suggest working with the options(error) functionality. Here's a simple example:
options(error = quote({dump.frames(to.file=TRUE); q()}))
You can create as elaborate a script as you want on an error condition, so you should just decide what information you need for debugging.
Otherwise, if there are specific areas you're concerned about (e.g. connecting to a database), then wrap them in a tryCatch() function.
Doing options(error=traceback) provides a little more information about the content of the lines leading up to the error. It causes a traceback to appear if there is an error, and for some errors it has the line number, prefixed by #. But it's hit or miss: many errors won't get line numbers.
Support for this will be forthcoming in R 2.10 and later. Duncan Murdoch just posted to r-devel on Sep 10, 2009 about findLineNum and setBreakpoint:
I've just added a couple of functions to R-devel to help with
debugging. findLineNum() finds which line of which function
corresponds to a particular line of source code; setBreakpoint() takes
the output of findLineNum, and calls trace() to set a breakpoint
there.
These rely on having source reference debug information in the code.
This is the default for code read by source(), but not for packages.
To get the source references in package code, set the environment
variable R_KEEP_PKG_SOURCE=yes, or within R, set
options(keep.source.pkgs=TRUE), then install the package from source
code. Read ?findLineNum for details on how to tell it to search
within packages, rather than limiting the search to the global
environment.
For example,
x <- " f <- function(a, b) {
if (a > b) {
a
} else {
b
}
}"
eval(parse(text=x)) # Normally you'd use source() to read a file...
findLineNum("<text>#3") # <text> is a dummy filename used by
parse(text=)
This will print
f step 2,3,2 in <environment: R_GlobalEnv>
and you can use
setBreakpoint("<text>#3")
to set a breakpoint there.
There are still some limitations (and probably bugs) in the code; I'll be fixing those.
You do it by setting
options(show.error.locations = TRUE)
I just wonder why this setting is not the default in R. It should be, as it is in every other language.
Specifying the global R option for handling non-catastrophic errors worked for me, along with a customized workflow for retaining info about the error and examining this info after the failure. I am currently running R version 3.4.1.
Below, I've included a description of the workflow that worked for me, as well as some code I used to set the global error handling option in R.
As I have it configured, the error handling also creates an RData file containing all objects in working memory at the time of the error. This dump can be read back into R using load() and then the various environments as they existed at the time of the error can be inspected interactively using debugger(errorDump).
I will note that I was able to get line numbers in the traceback() output from any custom functions within the stack, but only if I used the keep.source=TRUE option when calling source() for any custom functions used in my script. Without this option, setting the global error handling option as below sent the full output of the traceback() to an error log named error.log, but line numbers were not available.
Here are the general steps I took in my workflow, and how I was able to access the memory dump and error log after a non-interactive R failure.
I put the following at the top of the main script I was calling from the command line. This sets the global error handling option for the R session. My main script was called myMainScript.R. The various lines in the code have comments after them describing what they do. Basically, with this option, when R encounters an error that triggers stop(), it will create an RData (*.rda) dump file of working memory across all active environments in the directory ~/myUsername/directoryForDump and will also write an error log named error.log with some useful information to the same directory. You can modify this snippet to add other handling on error (e.g., add a timestamp to the dump file and error log filenames, etc.).
options(error = quote({
    # Set working directory where you want the dump to go,
    # since dump.frames() doesn't seem to accept absolute file paths.
    setwd('~/myUsername/directoryForDump');
    # First dump to file; this dump is not accessible by the R session.
    dump.frames("errorDump", to.file=TRUE, include.GlobalEnv=TRUE);
    # Specify sink file to redirect all output.
    sink(file="error.log");
    # Dump again to be able to retrieve the error message and write it to the log;
    # this dump is accessible by the R session since it is not dumped to file.
    dump.frames();
    # Print error message to file, along with simplified stack trace.
    cat(attr(last.dump, "error.message"));
    cat('\nTraceback:');
    cat('\n');
    # Print full traceback of function calls with all parameters.
    # The 2 passed to traceback() omits the outermost two function calls.
    traceback(2);
    sink();
    q()
}))
Make sure that from the main script and any subsequent function calls, anytime a function is sourced, the option keep.source=TRUE is used. That is, to source a function, you would use source('~/path/to/myFunction.R', keep.source=TRUE). This is required for the traceback() output to contain line numbers. It looks like you may also be able to set this option globally using options( keep.source=TRUE ), but I have not tested this to see if it works. If you don't need line numbers, you can omit this option.
From the terminal (outside R), call the main script in batch mode using Rscript myMainScript.R. This starts a new non-interactive R session and runs the script myMainScript.R. The code snippet given in step 1 that has been placed at the top of myMainScript.R sets the error handling option for the non-interactive R session.
Encounter an error somewhere within the execution of myMainScript.R. This may be in the main script itself, or nested several functions deep. When the error is encountered, handling will be performed as specified in step 1, and the R session will terminate.
An RData dump file named errorDump.rda and an error log named error.log are created in the directory specified by '~/myUsername/directoryForDump' in the global error handling option setting.
At your leisure, inspect error.log to review information about the error, including the error message itself and the full stack trace leading to the error. Here's an example of the log that's generated on error; note the numbers after the # character are the line numbers of the error at various points in the call stack:
Error in callNonExistFunc() : could not find function "callNonExistFunc"
Calls: test_multi_commodity_flow_cmd -> getExtendedConfigDF -> extendConfigDF
Traceback:
3: extendConfigDF(info_df, data_dir = user_dir, dlevel = dlevel) at test_multi_commodity_flow.R#304
2: getExtendedConfigDF(config_file_path, out_dir, dlevel) at test_multi_commodity_flow.R#352
1: test_multi_commodity_flow_cmd(config_file_path = config_file_path,
spot_file_path = spot_file_path, forward_file_path = forward_file_path,
data_dir = "../", user_dir = "Output", sim_type = "spot",
sim_scheme = "shape", sim_gran = "hourly", sim_adjust = "raw",
nsim = 5, start_date = "2017-07-01", end_date = "2017-12-31",
compute_averages = opt$compute_averages, compute_shapes = opt$compute_shapes,
overwrite = opt$overwrite, nmonths = opt$nmonths, forward_regime = opt$fregime,
ltfv_ratio = opt$ltfv_ratio, method = opt$method, dlevel = 0)
At your leisure, you may load errorDump.rda into an interactive R session using load('~/path/to/errorDump.rda'). Once loaded, call debugger(errorDump) to browse all R objects in memory in any of the active environments. See the R help on debugger() for more info.
This workflow is enormously helpful when running R in some type of production environment where non-interactive R sessions are initiated at the command line and you want to retain information about unexpected errors. The ability to dump memory to a file for inspecting working memory at the time of the error, along with having the line numbers of the error in the call stack, facilitates speedy post-mortem debugging of what caused the error.
First, set options(show.error.locations = TRUE), and then call traceback(). The error line number will be displayed after #.
