I have several (bash) scripts that are run both individually and in sequence. Let's call them one, two, and three. They take a while to run, and since we frequently run them in order, I'm writing a wrapper script that simply calls them one after another.
I'm not running into any problems, per se, but I'm realizing how brittle this is. For example:
script two has a -e argument for the user to specify an email address to send errors to.
script three has a -t argument for the same thing.
script one's -e argument means something else.
My wrapper script basically parses the union of all the arguments of the three subscripts and "does the right thing" (i.e. it has its own args, say -e for email, and it passes that value as the -e argument to the second script but as the -t argument to the third).
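A minimal sketch of that mapping, assuming getopts-style parsing (the script names and flags come from the description above; everything else is illustrative):

#!/bin/bash
# Hypothetical wrapper: accept our own -e flag, then fan the value out
# to the subscripts under their differing option names.
while getopts "e:" opt; do
  case "$opt" in
    e) email="$OPTARG" ;;
    *) exit 1 ;;
  esac
done

./one                 # one's -e means something else, so it isn't passed
./two -e "$email"     # two spells the option -e
./three -t "$email"   # three spells the same option -t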
My problem is that these scripts are now so tightly coupled. For example, someone comes along, looks at scripts two and three, and says, "oh, we should use the same argument for email address," and changes the -t to a -e in script three. Script three works fine on its own, but now the wrapper script is broken.
What would you do in this situation? I have some big warnings in the comments in each script, but this bothers me. The only other thing I can think of is to have one huge monolithic script, which I'm obviously not crazy about either.
The problem seems to be that people are thoughtlessly changing the API of the underlying scripts. You can’t go around arbitrarily changing an API that you know others are depending on. After all, it might not just be this wrapper script that expects script #3 to take a -t argument. So the answer seems to be: stop changing the underlying scripts.
(Please help me adjust the title and tags.)
When I run connmanctl I get a different prompt:
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, that alone (read: "the command connmanctl changes PS1 and returns") would have no effect at all on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl is running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's no need even to change PS1, as the connmanctl> line could simply be obtained by echo -n "connmanctl> ".
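A minimal sketch of that imagined loop (the command names are placeholders, not connmanctl's real dispatch):

#!/bin/bash
# Toy REPL: print a prompt, read a line, dispatch on it.
# read -e uses readline, so line editing comes for free.
while IFS= read -e -r -p "connmanctl> " line; do
  case "$line" in
    services)     echo "(would list services)" ;;
    technologies) echo "(would list technologies)" ;;
    quit|exit)    break ;;
    "")           ;;   # ignore empty input
    *)            echo "Unknown command: $line" ;;
  esac
done

Note that read -e only buys readline line editing, not command-aware completion; wrapping such a script in rlwrap is one common route to word completion.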
The reason behind this curiosity is that I'm trying to write a wrapper for connmanctl. I've already written it, and it works as intended, except that I don't know how to properly set up the autocompletion feature, and I think that in order to do so I first need to understand the right way to write an interactive shell script.
I've got an automated package-configuration test that should also run through a EULA and some other shell user interactions, all y/n-style stuff.
I've googled this quite a bit now, but I haven't found anything useful yet. A parametrized build doesn't seem to be the solution either (from what little I understand about Jenkins). So far I've used a freestyle project with several "Execute shell" steps.
I've tried now like this:
#!/bin/bash
/path/to/my/script.sh < input.txt
with input.txt containing a few echoed lines: " \n" (to scroll down the EULA) and "y" (to accept it).
The problem is that the script in question calls a second script, which should be the recipient of the input. But the whole of input.txt is consumed before the EULA even starts, so the input is never handled.
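If the inner script really reads from stdin (and not from /dev/tty), one variation worth trying is to pace the answers rather than redirect a file, so each one arrives only after its prompt; a sketch, with guessed delays:

#!/bin/bash
# Paced variant of the same redirect. Assumes the inner script reads
# stdin; the sleep durations are guesses to be tuned.
{
  sleep 2          # wait for the EULA pager to come up
  printf ' \n'     # space + newline to scroll down the EULA
  sleep 1
  printf 'y\n'     # accept
} | /path/to/my/script.sh

If the inner script insists on reading the terminal directly, no stdin trick will help, and a tool like expect is the usual escape hatch.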
Is there a better way of handling this sort of user-input situation? My Jenkins experience is rather limited.
I'm trying to automate the interaction with a remote device over telnet using expect.
At some point device generates output like this:
;
...
COMPLETED
...
;
What I need is to make my script exit after the "COMPLETED" keyword and the second ";" are found. However, all my attempts fail. The script either exits after the first ";" or does not exit at all and hangs. Please help.
Expect works.
I make a point of that because facha has already written "That [presumably the updated script, rather than Expect itself] didn't work" once. Expect has very few faults, but it's so unfamiliar to most programmers and administrators that it can be hard to discern exactly how to talk to it. Glenn's advice to
expect -re {COMPLETE.+;}
and
exp_internal 1
(or -d on the command line, and so on) is perfectly on target: from everything I know, those are exactly the first two steps to take in this situation.
I'll speculate a bit: from the evidence provided so far, I wonder whether the expect matches ever even reach the COMPLETED segment. Also, be aware that, if the device you're telnetting to is sufficiently squirrelly, even something as innocent-looking as "COMPLETED" might actually embed control characters. Your only hope in such cases is to resort to debugging techniques like exp_internal or autoexpect.
How about: expect -re {COMPLETED.+;}
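Put together, a minimal sketch of the whole exchange (the host name is a placeholder and the login dialogue is omitted; here expect is fed from a bash here-document):

#!/bin/bash
# Sketch only: drive telnet with expect, exiting once "COMPLETED"
# followed by the closing ";" has been seen.
expect <<'EOF'
exp_internal 1                    ;# match diagnostics while debugging
set timeout 60
spawn telnet device.example.com
expect {
    -re {COMPLETED.*;} { exit 0 }
    timeout            { puts "gave up waiting"; exit 1 }
}
EOF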
I sweated over the question above. The answer I'm going to supply took me a while to piece together, but it still seems hopelessly primitive and hacky compared to what one could do were completion to be redesigned to be less staticky. I'm almost afraid to ask if there's some good reason that completion logic seems to be completely divorced from the program it's completing for.
I wrote a command line library (can be seen in scala trunk) which lets you flip a switch to have a "--bash" option. If you run
./program --bash
It calculates the completion file, writes it out to a tempfile, and echoes
. /path/to/temp/file
to the console. The result is that you can use backticks like so:
`./program --bash`
and you will have completion for "program" in the current shell since it will source the tempfile.
For a concrete example: check out scala trunk and run test/partest.
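The moving parts are small enough to sketch. Here is the shape of the trick with hypothetical names; the real library computes the completion spec rather than hard-coding it:

#!/bin/bash
# Hypothetical ./program honoring --bash: write a completion spec to a
# temp file, then print a ". file" line for the caller to evaluate.
if [ "$1" = "--bash" ]; then
  tmp=$(mktemp)
  echo 'complete -W "build test run" program' > "$tmp"  # static list, illustration only
  echo ". $tmp"
  exit 0
fi

The backticks substitute the printed ". /path/to/temp/file" into the interactive shell's command line, which the shell then executes, sourcing the spec into the current shell; that's why no separate source step is needed.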
Consider that I have a program that needs its environment set up. It's in Perl, and I want to modify the environment (to search for libraries in a special spot).
Every time I mess with the standard way to do things in UNIX I pay a heavy price, and I pay a penalty in flexibility.
I know that by using a simple shell script I will inject an additional process into the process tree. Any process accessing its own process tree might be thrown for a little bit of a loop.
Anything recursive in a nontrivial way would need to defend against multiple expansions of the environment.
Anything resembling being in a pipe of programs (or closing and opening STDIN, STDOUT, or STDERR) is my biggest area of concern.
What am I doing to myself?
What am I doing to myself?
Getting yourself all het up over nothing?
Wrapping a program in a shell script in order to set up the environment is actually quite standard and the risk is pretty minimal unless you're trying to do something really weird.
If you're really concerned about having one more process around (and UNIX processes are very cheap, by design), then use the exec builtin, which, instead of forking a new process, simply replaces the current process with the new executable. So, where you might have had
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
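# plain invocation: the shell forks a child for perl and sticks around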
perl myprog.pl
You'd just say
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
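# exec: bash replaces itself with perl; no extra process is left behind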
exec perl myprog.pl
and the spare process goes away.
This trick, however, is almost never worth the bother; the one counter-example is that if you can't change your default shell, it's useful to say
$ exec zsh
in place of just running the shell, because then you get the expected behavior for process control and so forth.
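One way to convince yourself that nothing extra is created: the shell's PID survives the exec (the PID value here is purely illustrative):

$ echo $$      # PID of the current shell
4242
$ exec zsh
$ echo $$      # same PID: zsh replaced the old shell rather than forking
4242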