I'm trying to automate the interaction with a remote device over telnet using expect.
At some point the device generates output like this:
;
...
COMPLETED
...
;
What I need is for my script to exit after the "COMPLETED" keyword and the second ";" are found. However, all my attempts fail: the script either exits after the first ";" or hangs without exiting at all. Please help.
Expect works.
I make a point of that, because facha has already written "That [presumably the updated script, rather than Expect itself] didn't work" once. Expect has very few faults, but it's so unfamiliar to most programmers and administrators that it can be hard to discern exactly how to talk to it. Glenn's advice to
expect -re {COMPLETE.+;}
and
exp_internal 1
(or -d on the command line, and so on) is perfectly on-target: from everything I know, those are exactly the first two steps to take in this situation.
I'll speculate a bit: from the evidence provided so far, I wonder whether the expect match ever even gets as far as the COMPLETE segment. Also, be aware that, if the device to which one is telnetting is sufficiently squirrelly, even something as innocent-looking as "COMPLETE" might actually embed control characters. Your only hopes in such cases are to resort to debugging techniques like exp_internal, or autoexpect.
How about: expect -re {COMPLETED.+;}
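Putting that together, a minimal sketch might look like this (the host and command are placeholders, and exp_internal 1 is left on so you can watch every match attempt):

#!/usr/bin/expect -f
# Minimal sketch: "mydevice" and "some_command" are placeholders.
# exp_internal 1 prints the raw stream and each attempted match,
# which is the quickest way to see why a pattern isn't firing.
exp_internal 1
set timeout 60
spawn telnet mydevice
send "some_command\r"
# In Tcl regexps "." matches newline too, so this pattern can span
# the lines between COMPLETED and the closing semicolon.
expect {
    -re {COMPLETED.+;} { send_user "matched; exiting\n" }
    timeout            { send_user "timed out\n"; exit 1 }
}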
I've spotted something that I find very puzzling about the behavior of System.cmd. I just wanted to ask if anyone might have thoughts on what I may be doing wrong or what may be going on.
I've been trying to wrap an Elixir script around the ack programmer's grep. I tried this:
{_message, errlevel} = System.cmd("ack",[])
And I get back the help text that ack displays on an empty command line; I won't bother to reproduce it here because it's not necessarily germane to the question.
Then I try this:
{_message, errlevel} = System.cmd("ack",[""])
And it looks like iex hangs. Now I realize in the first case the output may be going to stderr rather than stdout. But there's another reason I'm asking about this: I found something even more interesting to me. Because I'm not 100% committed to using ack, I thought I'd try ripgrep, on the thought that it might interact with stdout better.
So if I do this:
{_message, errlevel} = System.cmd("rg",[])
Same as ack with no arguments: it shows me the help text. Again I'm guessing it's probably going to stderr. I could check to confirm my assumption, but what's even more interesting to me is that when I do this:
{_message, errlevel} = System.cmd("rg",[""])
It hangs again!
I had always figured the issue was with how ack interacts with stdout, but now I'm not so sure, since I see the same behavior with ripgrep. This is Elixir 1.13.2 on macOS 13.1. I've seen this same behavior with older versions of macOS.
Any idea how I can get the ack and/or ripgrep process to terminate so I get a response back? I've seen this https://hexdocs.pm/elixir/main/Port.html#module-zombie-operating-system-processes and I can try it but I was hoping for something slightly less hacky, I guess. Any suggestions? Also if I use the :stderr_to_stdout option set to true, it doesn't seem to make any difference.
I've seen this Q & A but I'm not totally clear on how using Task.start_link would help in this case. I mean would one do a Task.start_link on System.cmd?
You are executing a command that expects input on STDIN, but with System.cmd/3, there is no mechanism to provide the input.
Elixir has no way to know the behaviour of the command you are executing, so it waits for the process to terminate, which never happens. As José mentioned on the issue Roger Lipscombe raised, this is expected behaviour.
If you want to send the OS process input via STDIN, you need to use Ports. However, there are limitations there too, which I asked about here.
For ack specifically, it reads from STDIN if you don't provide a filename. So you can work around the limitation by putting the data in a file and providing the filename as an argument, rather than piping the data via OS streams.
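A minimal sketch of that workaround (the search pattern "TODO" and the file contents here are invented for illustration):

# Write the data to a file and hand ack the filename, so the command
# never waits on STDIN. Pattern and data are made-up examples.
path = Path.join(System.tmp_dir!(), "ack_input.txt")
File.write!(path, "nothing here\nTODO: fix this\n")

{output, exit_code} = System.cmd("ack", ["TODO", path])
IO.puts("exit code: #{exit_code}")
IO.puts(output)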
Looks like a bug. I've filed https://github.com/elixir-lang/elixir/issues/12321.
(Please, help me adjust title and tags.)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, this alone (read: "the command connmanctl changes PS1 and returns") wouldn't have any effect at all on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl is running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's not even a need to change PS1, as the connmanctl> line could simply be obtained by echo -n "connmanctl> ".
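A toy version of that imagined loop (the commands here are placeholders, not real connmanctl commands; note that read -e even gives you readline-style line editing):

#!/usr/bin/env bash
# Toy sketch of the prompt-and-read loop described above; PS1 is untouched.
while IFS= read -r -e -p 'connmanctl> ' line; do
    case $line in
        quit|exit) break ;;
        services)  echo "(would list services here)" ;;
        '')        ;;                                  # ignore empty input
        *)         echo "unknown command: $line" ;;
    esac
done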
The reason behind this curiosity is that I'm trying to write a wrapper around connmanctl. I've already written it, and it works as intended, except that I don't know how to properly set up the autocompletion feature, and I think that in order to do so I first need to understand the right way to write an interactive shell script.
I have a Nagios plugin, check_icmp, which returns 4 values if the command can ping the requested host; however, I get only 1 value if the ping fails, so I used the following command:
/usr/lib64/nagios/plugins/check_icmp -w 1000.0,20% -c 1400.0,60% -H 8.8.4.5 -m 5 | sed 's/pl=100%;20;60;0;100/rta=nan;;;; rtmax=nan;;;; rtmin=nan;;;; pl=100%;20;60;0;100/g'
and it returns
CRITICAL - 8.8.4.5: rta nan, lost 100%|rta=nan;;;; rtmax=nan;;;; rtmin=nan;;;; pl=100%;20;60;0;100
instead of
CRITICAL - 8.8.4.5: rta nan, lost 100%|pl=100%;20;60;0;100
So it works great on the host, but if I put this command in NagiosQL, the current status stays green ("OK") even if the ping fails:
https://ibb.co/LCzvXrV
I would guess that the reason you're getting an OK despite the result being "CRITICAL" is that the only thing Nagios cares about is the return code of the command you ran (also called an exit code). In your pipeline, sed has the last word and exits with return code 0, which means "everything went fine", and it did, obviously, for sed.
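If you do want to keep the sed step, one way to make Nagios see the plugin's status is to wrap the pipeline in a small script that reports the plugin's exit code instead of sed's. A sketch using bash's PIPESTATUS array (paths and thresholds taken from your question):

#!/bin/bash
# Run the plugin through sed, but exit with the plugin's code, not sed's.
/usr/lib64/nagios/plugins/check_icmp -w 1000.0,20% -c 1400.0,60% -H 8.8.4.5 -m 5 \
    | sed 's/pl=100%;20;60;0;100/rta=nan;;;; rtmax=nan;;;; rtmin=nan;;;; pl=100%;20;60;0;100/'
exit "${PIPESTATUS[0]}"

That said, the points below still stand.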
Since you don't explain why you're doing any of this, I can't comment on whether it's a good idea or not, but it seems very odd to my mind to include a pipe to sed in your check command. I understand that you are sanitizing the performance data that is returned by the plugin, but why?
Nagios is built to interpret return codes as the status of your check; that's just the way it is. Whatever issues you are seeing because of this plugin's behavior can surely be resolved in another way, without running sed magic on your checks that may or may not give the intended results. It's also worth noting that if you're inserting this into RRD files, messing with the order or number of values may create headaches, so I really wouldn't recommend it.
As a constructive suggestion in the future, please include what brought you to your current solution when asking these questions, as you can never know whether it's actually the best way to resolve your original issue. This is often called an XY Problem.
I encountered a bash script ending with the line exit. Would anything change without it (apart from scaring users who source the script rather than running it directly, since their terminal then closes)?
Note that I am not particularly interested in the difference between exit and return. Here I am only interested in the effects of having a parameterless exit at the end of a bash script (one being that it closes the console or process that sources the script rather than calling it).
Could it be there to address some lesser-known shell dialects?
There are generally no benefits to doing this. There are only downsides, specifically the inability to source scripts, as you say.
You can construct scenarios where it matters, such as having a sourcing script rely on it for termination on errors, or having a self-extracting archive header avoid executing its payload, but these unusual cases should not be the basis for a general guideline.
The one significant advantage is that it gives you explicit control over the return code.
Otherwise the return code of the script is going to be the return code of whatever the last command it executed happened to be, which may or may not be indicative of the actual success or failure of the script as a whole.
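A contrived sketch of that pitfall (the file names and commands are placeholders):

#!/usr/bin/env bash
# Without the explicit exit, the script's status would be rm's
# (almost always 0), masking whether the check found anything.
grep -q 'ERROR' /var/log/app.log   # hypothetical check of interest
found=$?
rm -f /tmp/app.lock                # cleanup runs last and would mask $found
exit "$found"                      # explicitly report the check's result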
A slightly less significant advantage is that if the last command's exit code is significant and you follow it up with exit $?, that tells the maintenance programmer coming along later that yes, you did consider what the exit code of the program should be, and they shouldn't monkey with it without understanding why.
Conversely, of course, I wouldn't recommend ending a bash script with an explicit call to exit unless you really mean "ignore all previous exit codes and use this one". Because that's what anyone else looking at your code is going to assume you wanted and they're going to be annoyed that you wasted their time trying to figure out why if you did it just by rote and not for a reason.
I have several (bash) scripts that are run both individually and in sequence. Let's call them one, two, and three. They take a while to run, so, since we frequently run them in order, I'm writing a wrapper script to simply call them in order.
I'm not running into any problems, per se, but I'm realizing how brittle this is. For example:
script two has a -e argument for the user to specify an email address to send errors to.
script three has a -t argument for the same thing.
script one's -e argument means something else.
My wrapper script basically parses the union of all the arguments of the three subscripts, and "does the right thing." (i.e. it has its own args - say -e for email, and it passes its value to the -e arg to the second script but to the -t arg for the third).
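Schematically, the wrapper does something like this (a simplified sketch; the real option handling is more involved):

#!/usr/bin/env bash
# Wrapper: one -e option, translated into each subscript's own flag.
while getopts 'e:' opt; do
    case $opt in
        e) email=$OPTARG ;;
        *) echo "usage: $0 -e email" >&2; exit 2 ;;
    esac
done

./one                  # one's -e means something else, so it isn't passed
./two -e "$email"
./three -t "$email"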
My problem is that these scripts are now so tightly coupled - for example, someone comes along, looks at scripts two and three, and says "oh, we should use the same argument for email address", and changes the -t to a -e in script three. Script three works fine on its own but now the wrapper script is broken.
What would you do in this situation? I have some big warnings in the comments in each script, but this bothers me. The only other thing I can think of is to have one huge monolithic script, which I'm obviously not crazy about either.
The problem seems to be that people are thoughtlessly changing the API of the underlying scripts. You can’t go around arbitrarily changing an API that you know others are depending on. After all, it might not just be this wrapper script that expects script #3 to take a -t argument. So the answer seems to be: stop changing the underlying scripts.