How to invoke bash or shell scripts from a Haskell program?

I'm writing some shell-script-style programs in Haskell, which I run in Git Bash, but there are a few other existing scripts I'd like to be able to call from them.
For example, I'd like to run Maven goals or do a git pull, but without having to integrate specifically with those tools.
Is there a way to do this?

You can use System.Process.
For example, running the shell command seq 1 10:
> import System.Process
> readProcess "seq" ["1", "10"] ""
"1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n"
it :: String
> readProcessWithExitCode "seq" ["1", "10"] ""
(ExitSuccess,"1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n","")
it :: (GHC.IO.Exception.ExitCode, String, String)

Yes, it is possible. You can use the process package, which exports many useful functions. The simplest is System.Cmd.system, which runs a command via the shell and yields its exit code.
More advanced features are provided in the System.Process module. With it you can start a process and communicate with it in many ways: pipe its input and output, inspect exit codes, wait for it to finish, modify its environment, etc.
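As a small sketch of those features, here is how you might capture a command's output and exit code with createProcess; the echo command is just a placeholder where something like git pull or a Maven goal would go:

```haskell
import System.Process
import System.IO

main :: IO ()
main = do
  -- `shell` runs the string through the system shell, so pipes and
  -- redirection work; swap in e.g. "git pull" for the echo below
  (_, Just hout, _, ph) <- createProcess (shell "echo hello") { std_out = CreatePipe }
  out  <- hGetContents hout
  code <- waitForProcess ph
  putStrLn (show code ++ ": " ++ out)
```

waitForProcess blocks until the child exits, so the exit code is available alongside the captured output.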

Of course. You can start by using system to invoke external processes.
More sophisticated piping and process control is available in a cross-platform way from the System.Process library.
Finally, you can consider porting your shell scripts to Haskell, via shell DSLs.

Turtle is a pretty nice, modern Haskell library for this.
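A minimal sketch of what Turtle-style scripting looks like, assuming the turtle package is installed (the specific commands here are just illustrative):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Turtle

main :: IO ()
main = do
  -- `shells` runs a shell command and throws on a nonzero exit code
  shells "echo hello from turtle" empty
  -- `inshell` streams a command's stdout line by line
  stdout (inshell "seq 1 3" empty)
```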

Related

LLDB: How to define a function with arguments in .lldbinit?

I would like to write a helper function that is available in my LLDB session. (I am not talking about Python here.)
This function will invoke methods on the current program's variables and then pass them to a Python script.
I think I understand how to write a Python script, but I am still not sure how to write an lldb script that interacts with my program.
For a general intro on how to use the lldb Python module to interact with your program, see:
https://lldb.llvm.org/use/python-reference.html
That will show you some different ways you can use Python in lldb, and particularly how to make Python based commands and load them into the lldb command interpreter.
There are a variety of example scripts that you can look at here:
https://github.com/llvm/llvm-project/tree/main/lldb/examples/python
There's an on-line version of the Python API help here:
https://lldb.llvm.org/python_api.html
and you can access the same information from within lldb by doing:
(lldb) script
Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D.
>>> help(lldb)
Help on package lldb:

NAME
    lldb

FILE
    /Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python/lldb/__init__.py

DESCRIPTION
...

Execute a shell command

I want to execute a shell command in Rust. In Python I can do this:
import os
cmd = r'echo "test" >> ~/test.txt'
os.system(cmd)
But Rust only has std::process::Command. How can I execute a shell command like cd xxx && touch abc.txt?
Everybody is looking for:
use std::process::Command;
fn main() {
let output = Command::new("echo")
.arg("Hello world")
.output()
.expect("Failed to execute command");
assert_eq!(b"Hello world\n", output.stdout.as_slice());
}
For more information and examples, see the docs.
You wanted to simulate &&. std::process::Command has a status method that returns an io::Result<ExitStatus>, and Result implements and_then. You can use and_then like && but in a safer, more idiomatic Rust way :)
You should really avoid system. What it does depends on what shell is in use and what operating system you're on (your example almost certainly won't do what you expect on Windows).
If you really, desperately need to invoke some commands with a shell, you can do marginally better by just executing the shell directly (like using the -c switch for bash).
If, for some reason, the above isn't feasible and you can guarantee your program will only run on systems where the shell in question is available and users will not be running anything else...
...then you can just use the system call from libc just as you would from regular C. This counts as FFI, so you'll probably want to look at std::ffi::CStr.
For anyone looking for a way to set the working directory of the subprocess running the command (i.e., run ls in some directory), there's Command::current_dir. Usage:
use std::process::Command;
Command::new("ls")
.current_dir("/bin")
.spawn()
.expect("ls command failed to start");

Redirecting stdout to stdin in languages other than Bash when you fork or execute a program

I need to write a program which executes several subprograms, using pipes and redirection. Since I also need data structures and functions, I want to avoid Bash for this. So far, I know how to use redirection in Bash without any problem.
$ ./myprogram -f 1 -h -l < mysource.dat | myprogram2
My questions are:
How can you code this line in another language such as C, Python, Perl, Java, or Ruby, avoiding system("...")?
Also, which language provides the best process-management tools?
Answer to question 1 in Python:
from subprocess import Popen, PIPE
f = open("mysource.dat")
p1 = Popen(["./myprogram", "-f", "1", "-h", "-l"], stdin=f, stdout=PIPE)
f.close()
p2 = Popen(["myprogram2"], stdin=p1.stdout)
p1.stdout.close()  # so p1 receives SIGPIPE if p2 exits first
p1.wait()
p2.wait()
More information at the subprocess module documentation.
Regarding question 2, Java isn't likely to be the best option. C lets you use the pipe, fork, dup2, and exec system calls to do everything, but involves more typing. The others are most likely equally good, depending on your personal scripting preferences.

How to call bash commands from tcl script?

Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script which calls such commands directly (i.e. without exec).
How can I make tclsh recognize these commands while interpreting Tcl script files, just as it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of external commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <#stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because that's not an existing Tcl command. By default, unknown checks whether it was invoked from an interactive session at the top level (not inside a proc body), and if so it looks for an executable of that name on the path. You can get something like this behavior by writing your own unknown proc.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue for how tclsh runs such utilities is right there in its name: sh means "shell", so tclsh is a rough equivalent of Bash in that both are shells. But Tcl != tclsh, so in a plain Tcl script you have to use exec.

Package bash script in "executable" for double-click execution (ideally platform independent)?

I wrote a number of bash scripts that greatly simplify the routine, but very tedious, file manipulation that my group does.
Unfortunately, most in my group cannot open a terminal, let alone run scripts with complex arguments.
Is there a way to nicely package a bash script into an executable (that accepts arguments) that runs nicely on multiple computer platforms?
I run Mac OS X, but many of my colleagues run Windows (which can run bash scripts via Cygwin, etc.). I am aware of Platypus; is there an equivalent for Windows?
I do not know if it meets all of your requirements, but I use makeself, which is really great for packaging things. It works with Cygwin, so it might fit your needs ^^
Basically, when you create a makeself archive, you supply a script that is executed when the archive is launched. This script receives all the parameters given to the archive (whatever you want):
makeself.sh ${dir_to_archive} ${name_of_archive} ${description} ${startup_script}
When you run the self-extracting archive:
my_archive.run ${param1} ${param2} ${paramN}
it will uncompress your archive and run:
${startup_script} ${param1} ${param2} ${paramN}
my2c
