Guile Scheme system* with pdflatex -jobname

I'm trying to call pdflatex from a guile scheme file. This is the Guile command I'm using:
(system* "cat" "foo.txt" "|" "pdflatex" "-jobname" "\"bar\"")
This is the error I get back after running the file:
cat: invalid option -- 'j'
Try 'cat --help' for more information.
If I run the command from bash shell it runs normally.
cat foo.txt | pdflatex -jobname "bar"
-jobname is a valid option for pdflatex, but system* seems to have a problem with it.
I'm using GNU Guile 2.2.4 and pdfTeX 3.14159265-2.6-1.40.20.

Use system, not system*. It takes a single string as the argument, and executes it using the shell, which will perform the desired piping.
(system "cat foo.txt | pdflatex -jobname 'bar'")
system* doesn't use the shell, so the "|", "pdflatex", "-jobname", and "\"bar\"" strings are all handed to cat as ordinary arguments, which is why cat complains about the unknown -j option. As the manual explains:
system* is similar to system, but accepts only one string per-argument, and performs no shell interpretation. The command is executed using fork and execlp. Accordingly this function may be safer than system in situations where shell interpretation is not required.
Note that your command is a Useless use of cat since pdflatex takes the filename as an argument. You could use system* to execute it directly.
(system* "pdflatex" "-jobname" "bar" "foo.txt")
Also, you don't need to add extra quotes around bar when you use system*; since it doesn't use the shell, it doesn't parse special characters.

Related

Pass output of a bash script as command line argument for another script

Beginner in bash and makefiles here. I have a course where we need to create a makefile in which each rule calls one of several already-compiled programs. All of the compiled programs take a command line argument. As the arguments can be quite large and mostly consist of the same character repeated (for example AAAAAAA), I made a script that uses python to print the argument. Example:
#!/bin/bash
python -c 'print "A"*1000 + "Q"*200'
I am wondering how to create the rule in the makefile so that the output of the above script will be passed as the command line argument. Essentially like this:
test:
	./schoolprogram ./myprogram.sh
So when make test is executed then ./schoolprogram should be run with the argument 1000 A's followed by 200 Q's and not the literal string "./myprogram.sh".
I don't know why you have a script that does nothing but invoke python; why not just run python directly?
In any event, this isn't really a makefile question; it's mostly a shell question. When writing makefile recipes, the best approach is to get the command you want to run working at your shell prompt, then take that same command (with one small tweak) and put it into your makefile.
In the shell, you can use either $(...) or backticks (an older style) to run a command and replace it with the output of that command. So you can run this:
$ ./schoolprogram $(./myprogram.sh)
or more easily:
$ ./schoolprogram $(python -c 'print "A"*1000 + "Q"*200')
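Either way you can sanity-check what the substitution expands to before moving it into the recipe, e.g. with echo (smaller counts used here purely so the output stays readable; the 1000/200 version behaves the same):
$ echo ./schoolprogram $(python -c 'print "A"*3 + "Q"*2')
./schoolprogram AAAQQ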
Now when you take a command and put it into a makefile recipe, the thing you have to remember is that the $ character is special to make, so if you want a $ to be passed to your command you have to escape it by doubling it, $$. So you'd write:
test:
	./schoolprogram $$(./myprogram.sh)
or equivalently:
test:
	./schoolprogram $$(python -c 'print "A"*1000 + "Q"*200')

Using the Bash autocompletion of another command

When I create a command that wraps an existing command with some sugar, I'd like the new command to support the autocompletion of the original command. Is there a way to tell Bash to reuse the autocompletion script of another command?
Silly example:
cat > ~/ls-on-steroids.sh <<'EOF'
echo "Here are some goodies!"
ls "$@"
EOF
chmod +x ~/ls-on-steroids.sh
Now, how do I configure my new script such that when I type:
~/ls-on-steroids.sh <TAB><TAB>
I'd like the same behavior as with:
ls <TAB><TAB>
Preferably in a portable, repeatable manner, without having to manually track down the location of ls's autocomplete script.
You have to configure it manually, but it's relatively simple to copy completions from an existing command. First, run complete -p ls to see what completion, if any, is defined for ls. If nothing comes up, ls doesn't use any special completions. You will probably see something like the following, though:
complete -o default -F _longopt ls
which says that the function _longopt is called to generate completions for the command ls, and that if it doesn't produce any results, the bash default completion is used. You can apply the same function to your wrapper by simply running
complete -o default -F _longopt ls-on-steroids.sh
(i.e., replace ls with your script's name as the final argument in the command printed by complete -p).
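If you want to do this for several wrappers without repeating the lookup by hand, one rough approach is to let the shell re-issue whatever complete -p prints, with the wrapper's name swapped in as the last word. The helper name below is mine, not a standard command:
copy_completion() {
  local spec
  spec=$(complete -p "$1" 2>/dev/null) || return 1   # e.g. "complete -o default -F _longopt ls"
  eval "${spec% *} $2"                               # re-run it with the new command name as the last word
}
copy_completion ls ls-on-steroids.sh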

CMake's execute_process and arbitrary shell scripts

CMake's execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell. The thing is, I want to use pipes, file descriptor redirection, etc. - and that does not seem to be possible. The alternative would be very painful for me (I think)...
What should I do?
PS - CMake 2.8 and 3.x answer(s) are interesting.
You can execute any shell script by using your shell's support for taking a script as a single string argument.
Example:
execute_process(
COMMAND bash "-c" "echo -n hello | sed 's/hello/world/;'"
OUTPUT_VARIABLE FOO
)
will result in FOO containing world.
Of course, you would need to escape quotes and backslashes with care. Also remember that running bash would only work on platforms which have bash - i.e. it won't work on Windows.
execute_process command seems to only let you, well, execute a process - not an arbitrary line you could feed a command shell.
Yes, exactly that is what the documentation for the command says:
All arguments are passed VERBATIM to the child process. No intermediate shell is used, so shell operators such as > are treated as normal arguments.
I want to use pipes
Different COMMANDs within the same execute_process invocation are actually piped:
Runs the given sequence of one or more commands with the standard output of each process piped to the standard input of the next.
file descriptor redirection, etc. - and that does not seem to be possible.
For complex things, just prepare a separate shell script and run it using execute_process. You can pass variables from CMake to this script using its parameters, or with a preliminary configure_file.
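As a rough sketch of that approach (the script name, the variables, and the actual pipeline here are made up for illustration), the external script takes whatever it needs as positional parameters and does the redirection itself:
#!/usr/bin/env bash
# redirect.sh -- does the piping/redirection that execute_process can't express.
# Invoked from CMakeLists.txt with something like:
#   execute_process(COMMAND bash "${CMAKE_CURRENT_SOURCE_DIR}/redirect.sh" "${IN_FILE}" "${OUT_FILE}")
set -e
grep -v '^#' "$1" | sort > "$2"   # $1: input file, $2: output file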
I needed to pipe two commands one after the other and actually learned that each COMMAND of the execute_process is piped already. So at least that much is resolved by simply adding commands one after the other:
execute_process(
COMMAND echo "Hello"
COMMAND sed -e "s/H/h/"
OUTPUT_VARIABLE GREETINGS
OUTPUT_STRIP_TRAILING_WHITESPACE)
Now the variable GREETINGS is set to hello.
If you indeed need a lot of file redirection (as you stated), you probably want to write an external script and then execute that script from CMakeLists.txt. It's really difficult to get all the escaping right in CMake.
If you can simplify your scripts to one command generating a file, then another handling that file, etc. then you can always use the INPUT_FILE and OUTPUT_FILE options. Or pass a filename to your command for the input.
It's often much cleaner to handle one file at a time. Although I understand that some commands may need multiple sources and destinations.

Difference between typing a shell command and saving it to a file and using `cat myfile` to execute it?

I have an rsync command that works as expected when I type it directly into a terminal. The command includes several --include='blah' and --exclude='foo' type arguments. However, if I save that command to a one-line file called "myfile" and I try `cat myfile` (or, equivalently $(cat myfile)), the rsync command behaves differently.
I'm sure it is the exact same command in both cases.
Is this behavior expected/explainable?
I've found the answer to this question. The point is that the output of cat is not re-parsed as shell syntax the way a typed command line is: the result of the backticks (or $(...)) only undergoes word splitting and filename expansion, not quote removal. So the quotes in arguments like --include='blah' reach rsync as literal characters instead of being stripped, and rsync ends up matching different patterns than when the command is typed directly.
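A quick way to see this at the prompt (the option value is made up; printf is used only to show the argument boundaries):
$ printf '<%s> ' --exclude='foo'; echo
<--exclude=foo>
$ echo "--exclude='foo'" > myfile
$ printf '<%s> ' $(cat myfile); echo
<--exclude='foo'>
In the second case rsync receives a pattern containing literal quote characters, which is enough to change what it matches.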
As a solution, I've just made "myfile" a shell script that I can execute rather than trying to use cat.

Run the output of a script as a standalone bash command

Suppose you have a perl script "foobar.pl" that prints the following to stdout:
date -R
and you want to run whatever that perl script outputs as a standalone bash command (don't worry about security problems as this is running in a trusted environment).
How do you get bash to recognize this as a standalone command?
I've tried using xargs, but that seems to want to pass arguments only to a pre-defined command.
I want the perl script to be able to output any arbitrary command.
$command = 'date -R';
system($command); ## in the perl script
the above does not work because I want it to run in an existing cygwin environment ...
foobar.pl | xargs bash -i {}
the above does not work because bash seems to be running a new process and thus the initialization and settings from bash_profile don't get instantiated.
`foobar.pl`
Bad:
`perl foo.pl`
$(perl foo.pl)
Why is this bad? Because of so many reasons; most notably:
Word splitting: What you're doing here is taking the output of the perl script, splitting it into chunks wherever there are spaces, tabs or newlines, and passing those chunks as arguments to the first chunk, which is taken as the command to run. In really simplistic cases like $(echo 'date +%s') it might work, but that gives a misleading picture of what's really going on.
You cannot do quoting or use any other bash shell features like parameter expansion, bash keywords, etc.
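A two-line demonstration of both points, at an interactive bash prompt:
$ $(echo 'echo "hello world"')
"hello world"
Typed directly, echo "hello world" prints hello world; via the substitution, the quote characters are passed through literally and the string is split into separate words before echo ever sees it.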
Good, but inconvenient:
perl foo.pl > mytmpfile; bash mytmpfile
Creating a temporary file to put your perl script's output into and then running that with bash works, but it's inconvenient as you need to create (and clean up!) your temporary file and have it in a portably writable (and secure!) location.
Also remember not to use . or source to execute the temporary file unless you really intend to run it all in the active shell. Moreover, when you use . or source, you won't be able to reliably clean up your temporary file afterward.
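If you do take the temporary-file route, mktemp plus a trap covers the create-and-clean-up part; a minimal sketch:
tmpfile=$(mktemp) || exit 1          # unique name in a writable temp directory
trap 'rm -f "$tmpfile"' EXIT         # removed even if a later step fails
perl foo.pl > "$tmpfile"
bash "$tmpfile"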
Probably the best solution:
perl foo.pl | bash
This is pretty safe all-round ("safe" as in least bug-prone), assuming your perl script outputs correct bash syntax, of course.
Alternatives that do pretty much the same thing:
bash < <(perl foo.pl)
bash <(perl foo.pl)
Given the perl file:
print "date";
the following bash command will do it.
$ $(perl qq.pl)
Mon Apr 6 11:02:07 WAST 2009
But that is not run in the context of the current shell. If you really want to invoke it in the current shell, do this:
$ perl qq.pl >/tmp/qq.$$ ; . /tmp/qq.$$ ; rm -f /tmp/qq.$$
Mon Apr 6 11:04:59 WAST 2009
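If the goal is specifically to run the generated commands in the current shell (so your existing environment and anything from your bash_profile still applies), the temporary file can be avoided with process substitution or eval; a sketch, assuming the script prints complete bash commands:
$ source <(perl qq.pl)
$ eval "$(perl qq.pl)"
Both read the script's output and execute it in the shell you're already in.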
Try:
foobar.pl | bash
I don't think this is exactly what you're looking for, but it's what I've got :-)
perl foo.pl > /tmp/$$.script; bash /tmp/$$.script; rm /tmp/$$.script
Good luck!
Try with open($fh, "-|", $arg1, $arg2) to run the command from within the perl script itself.
