LLDB: How to define a function with arguments in .lldbinit?

I would like to write a helper function that is available in my LLDB session. (I am not talking about Python here.)
This function will invoke methods of current program variables and then pass them to a Python script.
I think I understand how to write a Python script, but I am still not sure how to write an LLDB script that interacts with my program.

For a general intro on how to use the lldb Python module to interact with your program, see:
https://lldb.llvm.org/use/python-reference.html
That will show you some different ways you can use Python in lldb, and particularly how to make Python based commands and load them into the lldb command interpreter.
There are a variety of example scripts that you can look at here:
https://github.com/llvm/llvm-project/tree/main/lldb/examples/python
There's an on-line version of the Python API help here:
https://lldb.llvm.org/python_api.html
and you can access the same information from within lldb by doing:
(lldb) script
Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D.
>>> help(lldb)
Help on package lldb:

NAME
    lldb

FILE
    /Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python/lldb/__init__.py

DESCRIPTION
...
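To make that concrete, here is a minimal sketch of a Python-backed command that takes arguments (the file name mycmd.py and the command name greet are hypothetical); you would load it from your .lldbinit with command script import /path/to/mycmd.py:

```python
# mycmd.py -- a minimal lldb command that takes arguments (hypothetical example).
# Load it from ~/.lldbinit with:  command script import /path/to/mycmd.py

def greet(debugger, command, result, internal_dict):
    """Usage: greet <text> -- echoes its arguments via lldb's result object."""
    # 'command' is the raw argument string typed after the command name
    result.AppendMessage("Hello, " + command)

def __lldb_init_module(debugger, internal_dict):
    # Register the function above as the interactive command 'greet';
    # lldb calls this hook automatically when the module is imported.
    debugger.HandleCommand('command script add -f mycmd.greet greet')
```

After loading, typing greet world at the (lldb) prompt prints Hello, world; real commands would use the debugger and result objects to inspect program state instead.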


Why won't an external executable run without manual input from the terminal command line?

I am currently writing a Python script that will pipe some RNA sequences (strings) into a UNIX executable, which, after processing them, will then send the output back into my Python script for further processing. I am doing this with the subprocess module.
However, in order for the executable to run, it must also be given some additional arguments. Using the subprocess module, I have been trying to run:
import subprocess
seq= "acgtgagtag"
output= subprocess.Popen(["./DNAanalyzer", seq])
Despite my environment variables being set properly, the executables running without problem from the terminal command line, and the subprocess module functioning normally (e.g. subprocess.Popen(["ls"]) works just fine), the Unix executable prints the same output:
Failed to open input file acgtgagtag.in
Requesting input manually.
There are a few other Unix executables in this package, and all of them behave the same way. I even tried to create a simple text file containing the sequence and specify it as the input in both the Python script as well as within the command line, but the executables only want manual input.
I have looked through the package's manual, but it does not mention why the executables can ostensibly be only run through the command line. Because I have limited experience with this module (and Python in general), can anybody indicate what the best approach to this problem would be?
Popen() is actually a constructor for an object representing the child process that runs the executable. But because I didn't set a standard input or output (stdin and stdout), they default to None, meaning the child's I/O streams are inherited from the parent rather than connected to my script.
What I should have done is pass subprocess.PIPE to tell the Popen object that I want to pipe input and output between my program and the process.
Additionally, the environment variables of the script (in the main shell) were not the same as the environment variables of the child process, and these specific executables needed certain environment variables in order to function (in this case, the path to the parameter files in its package). This was done in the following fashion:
import subprocess as sb
seq = "acgtgagtag"
# Note: env replaces the child's entire environment, so include everything the tool needs
my_env = {"BIOPACKAGEPATH": "/Users/Bobmcbobson/Documents/Biopackage/"}
p = sb.Popen(["biopackage/bin/DNAanalyzer"], stdin=sb.PIPE, stdout=sb.PIPE, env=my_env)
strb = (seq + "\n").encode("utf-8")
out, err = p.communicate(input=strb)
After creating the Popen object, we send it a formatted input string using communicate(), which returns a (stdout, stderr) tuple. The output can now be read and processed further in whatever way in the script.
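Since the DNAanalyzer binary only exists inside that package, the same piping pattern can be verified with a standard utility standing in for it; a self-contained sketch (assuming a POSIX system with tr available):

```python
import subprocess as sb

seq = "acgtgagtag"
# tr stands in for the real DNAanalyzer here: it uppercases whatever arrives on stdin
p = sb.Popen(["tr", "a-z", "A-Z"], stdin=sb.PIPE, stdout=sb.PIPE)
out, err = p.communicate(input=(seq + "\n").encode("utf-8"))
print(out.decode("utf-8"))  # prints ACGTGAGTAG
```

communicate() closes stdin, waits for the process to exit, and returns a (stdout, stderr) tuple; err is None here because stderr was not piped.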

Coding a relative path to file in OS X [duplicate]

I have a Haskell script that runs via a shebang line making use of the runhaskell utility. E.g...
#! /usr/bin/env runhaskell
module Main where
main = do { ... }
Now, I'd like to be able to determine the directory in which that script resides from within the script, itself. So, if the script lives in /home/me/my-haskell-app/script.hs, I should be able to run it from anywhere, using a relative or absolute path, and it should know it's located in the /home/me/my-haskell-app/ directory.
I thought the functionality available in the System.Environment module might be able to help, but it fell a little short. getProgName did not seem to provide useful file-path information. I found that the environment variable _ (that's an underscore) would sometimes contain the path to the script, as it was invoked; however, as soon as the script is invoked via some other program or parent script, that environment variable seems to lose its value (and I am needing to invoke my Haskell script from another, parent application).
It would also be useful to know whether I can determine the directory in which a pre-compiled Haskell executable lives, using the same technique or otherwise.
As I understand it, this is historically tricky in *nix. There are libraries for some languages to provide this behavior, including FindBin for Haskell:
http://hackage.haskell.org/package/FindBin
I'm not sure what this will report with a script though. Probably the location of the binary that runhaskell compiled just prior to executing it.
Also, for compiled Haskell projects, the Cabal build system provides data-dir and data-files and the corresponding generated Paths_<yourproject>.hs for locating installed files for your project at runtime.
http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#paths-module
There is a FindBin package which seems to suit your needs and it also works for compiled programs.
For compiled executables: in GHC 7.6 or later you can use System.Environment.getExecutablePath.
getExecutablePath :: IO FilePath
Returns the absolute pathname of the current executable. Note that for scripts and interactive sessions, this is the path to the interpreter (e.g. ghci).
There is executable-path which worked with my runghc script. FindBin didn't work for me as it returned my current directory instead of the script dir.
I could not find a way to determine the script path from Haskell itself (which is a real pity, IMHO). However, as a workaround, you can wrap your Haskell script inside a shell script:
#!/bin/sh
SCRIPT_DIR=$(dirname "$0")
runhaskell <<EOF
main = putStrLn "My script is in \"$SCRIPT_DIR\""
EOF

How to invoke bash or shell scripts from a haskell program?

I'm writing some shell scripts in Haskell, which I'm running in Git Bash, but there are a few other existing scripts I'd like to be able to use from those scripts.
For example, I'd like to run Maven goals or do a git pull, but without having to integrate specifically with those tools.
Is there a way to do this?
You can use System.Process.
For example, executing seq 1 10 shell command:
> import System.Process
> readProcess "seq" ["1", "10"] ""
"1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n"
it :: String
> readProcessWithExitCode "seq" ["1", "10"] ""
(ExitSuccess,"1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n","")
it :: (GHC.IO.Exception.ExitCode, String, String)
Yes, it is possible. You can use the process package, which exports many useful functions. The simplest is System.Cmd.system, which runs an application in a shell and yields its exit code.
More advanced features are provided by the System.Process module. With this module you can run a process and communicate with it in many ways (piping input, reading exit codes, waiting for the process to stop, modifying its environment, etc.).
Of course. You can start by using system to invoke external processes.
More sophisticated piping and process control are available in a cross-platform way from the System.Process library.
Finally, you can consider porting your shell scripts to Haskell, via shell DSLs.
Turtle is a pretty nice, modern Haskell library for this.

How to use quicklisp when CL program is invoked as a shell script?

I currently have a small program in Common Lisp which I want to run as a shell script. I am using SBCL and am perfectly fine with it, so I would prefer to stay on this platform. :)
I am aware of the --script option, and it works flawlessly except for the (ql:quickload) form.
My program uses CL-FAD, which it loads through ql:quickload (I should mention that this is the package-loading function from Quicklisp). When the script runs up to evaluating the
(ql:quickload :cl-fad)
form, it breaks with the following error:
package "QL" not found
The program is packed into a single source file, which has the following header:
(defpackage :my-package
  (:use :common-lisp)
  (:export :my-main-method))
It is a simple automation executable, so I decided (maybe erroneously) not to write an ASDF system. It exports a single function which should be run without any arguments.
For this program I am currently trying to write the launcher script, and this is what I am staring at:
#!/usr/bin/sbcl --script
(load "my-program.lisp")
(in-package :my-package)
(my-main-method)
These three lines (not counting the shebang) are what I want to automate. As the documentation says, a script with this shebang can be called simply as ./script.lisp, and it really can... with the error described above.
What do I need to add to the launcher for :cl-fad to load properly? The documentation states that with the --script option SBCL doesn't load any init file, so do I really need to copy-paste the lines
#-quicklisp
(let ((quicklisp-init (merge-pathnames "systems/quicklisp/setup.lisp"
                                       (user-homedir-pathname))))
  (when (probe-file quicklisp-init)
    (load quicklisp-init)))
(which ql:add-to-init-file adds to .sbclrc) into my launcher script?
Or do I have some deep architectural flaw in my program's setup?
And yes, when I enter the lines I am trying to automate into the SBCL REPL itself, the program runs as expected.
You are doing everything right.
Basically, before you can use Quicklisp, you need to load it (currently it is not bundled with SBCL, although that may change in the future). There are various ways to do it. For example, you can load your .sbclrc, which contains the Quicklisp init code:
#!/usr/bin/sbcl --script
(load ".sbclrc")
(load "my-program.lisp")
(in-package :my-package)
(my-main-method)
or just paste those lines into your script, as you suggested.
Creating a dedicated core image is a good option. You may:
load Quicklisp and run sb-ext:save-lisp-and-die to produce a new image, then write a shell/bat script named, say, qlsbcl, like this:
sbcl --core <my-new-image-full-path-location> "$@"
grab clbuild2 at http://gitorious.org/clbuild2 and run clbuild lisp. You'll have to symlink clbuild into a binary directory on your path and tweak some scripts a bit if your Quicklisp is not in the usual place ~/quicklisp (https://gist.github.com/1485836) or if you use ASDF2 (https://gist.github.com/1621825). By doing so, clbuild creates a new core with Quicklisp, ASDF and anything you may add in conf.lisp. Now the shebang may look like this:
#!/usr/bin/env sbcl --noinform --core <my-clbuild-install-directory>/sbcl-base.core --script
The advantage of clbuild is that you can easily create and manage cores and Quicklisp installations from the shell for SBCL (by default) or any other modern CL implementation such as ccl64. Mixing the two techniques (script and clbuild) will solve your problem.

How to call bash commands from tcl script?

Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script which calls Bash commands directly (i.e. without exec).
How can I make tclsh recognize Bash commands while interpreting Tcl script files, just like it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of Bash commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <@stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc is invoked when you type a command like ls, because that's not an existing Tcl command. By default, that proc checks whether the command was invoked from an interactive session at the top level (not indirectly in a proc body), and if so it looks for an executable of that name somewhere on the path. You can get something like this in scripts by writing your own proc unknown.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue for how tclsh runs such utilities is right there in its name: sh means "shell", so tclsh is the rough equivalent of Bash in that both are shells. But Tcl != tclsh, so in a Tcl script you have to use exec.
