I have an expect script that logs in to an SBC and runs a command for a particular interface.
I call this script from a shell script to perform the same command on multiple SBCs and multiple interfaces. I run the script six times on each SBC, grabbing details for a single interface each time, and the output is saved to a different file for each SBC/interface combination.
The trouble is that when I run it, for example on SBC A, in two of the files the command is truncated and nothing happens; say, interfaces 2 and 3.
If I run the script again, five interfaces work this time and a different interface, interface 4, fails with a truncated command.
I don’t understand what would cause the command to fail randomly. Any thoughts would be appreciated.
OK, I think I have cracked it. Occasionally the command I am entering is itself matching the expected prompt pattern. In reality the command should always match the prompt, so it is strange that it doesn't fail every time.
I have tweaked the expected prompt and am re-running the script.
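For anyone who hits the same thing, the usual fix is to anchor the pattern so that the echoed command text can never satisfy it early. A minimal sketch, assuming a prompt ending in "# " (the real SBC prompt will differ):

# Match the prompt only at the start of a line and at the very end of
# the buffer, so a command that merely contains the prompt text cannot
# match while it is being echoed.
expect -re {[\r\n][^\r\n]*# $}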
Related
I need to consecutively execute 300 or 600 or more cURL POST requests. I have them in plain text, and I simply copy-paste into the terminal.
But regardless which one I use, Cygwin or Git on Windows, the following occurs:
If I paste 300 into the terminal, it will consecutively execute around 104 of them and then start hanging. I can't stop it or type anything; it's a complete freeze that lasts forever.
But if I paste 200 into the terminal, all will be consecutively finished with success.
I'm not sure which details I should provide that could prove to be the bottleneck here, so for a start I will only say that ONE command contains ~1270 characters.
Please be so kind as to provide a solution that allows consecutively executing even 2000 such cURL POST requests with "one paste" into the terminal.
I would do:
one paste in an editor, to save those calls as a script
one call to that script in the shell.
That way, you can accommodate any number of calls in your bash (Git or Cygwin) session by executing one script which was filled with "one paste".
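A minimal sketch (the file name is made up):

# Paste all of the cURL lines into curl_calls.sh with your editor, then:
chmod +x curl_calls.sh
./curl_calls.sh > results.log 2>&1

The terminal then only has to buffer one short command line instead of hundreds of ~1270-character ones.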
The OP confirms, however, that the issue persists, which could be linked to curl itself, as in curl/curl issue 5784: "curl stops execution/"hangs" after random time" (Aug. 2020).
However, that issue just got closed (Nov. 2020), with Daniel Stenberg commenting:
I don't see anything for us to act on in this issue.
It's not even clear that this is curl's fault. And nothing has been added to the case in months.
Because of all this, I'm closing.
(Please help me adjust the title and tags.)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, this alone (read: "the command connmanctl changes PS1 and returns") wouldn't have any effect at all on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl is running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's not even a need to change PS1, as the connmanctl> line could simply be obtained by echo -n "connmanctl> ".
The reason behind this curiosity is that I'm trying to write a wrapper around connmanctl. I've already written it, and it works as intended, except that I don't know how to properly set up the autocompletion feature, and I think that in order to do so I first need to understand the right way to write an interactive shell script.
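To make the scenario I imagine concrete, here is a minimal sketch of such a loop in bash (the command names are made up); read -e gives readline line editing without touching PS1:

# Minimal interactive-loop sketch: -e enables readline editing,
# -p prints the prompt, and history -s makes up-arrow recall work.
while read -e -p "connmanctl> " line; do
    case "$line" in
        quit|exit)  break ;;
        services)   echo "...would list services here..." ;;
        *)          echo "unknown command: $line" ;;
    esac
    history -s "$line"
done

(The real connmanctl is a C program that, as far as I understand, uses the readline library directly, which is also where its tab completion comes from.)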
I am creating a browser-based application for interacting with my local terminal, and I'm using the script command in the terminal (actually in a Java process) to feed input and capture output to display in the browser.
I have found that the virtual terminal inside the script process has 80 columns by default, and when a line of input exceeds that number, I see unexpected behavior. When it reaches the length limit, it adds a space, then a return, then continues with the line. For example, if I input the following:
user_name ~/my/current/directory$ ls /some/really/long/path/to/some/directory/somewhere
The following is what is actually transcribed (and forwarded to my browser app):
user_name ~/my/current/directory$ ls /some/really/long/path/to/some/directory/so ^Mmewhere
Notice the so ^Mmewhere instead of somewhere.
When I hit the up arrow after that to retrieve the last input, I get something different:
user_name ~/my/current/directory$ ls /some/really/long/path/to/some/directory/som^Mmewhere
This time, instead of the extra space character, it's duplicating the m: som^Mmewhere.
What is going on here? Is it that the script command is really intended to produce text output for a human reader, and therefore favors visual formatting over executability? Is there an alternative to script that would work better for my purpose?
Edit
I should have mentioned that I'm doing this in macOS Sierra. Not sure if that makes a difference.
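One workaround worth trying, assuming the mangling really comes from the pty's default 80x24 size (my assumption, not verified on Sierra): widen the pseudo-terminal via the first input line fed into the script session.

# First input line sent into the 'script' session: widen the pty so
# long input lines never soft-wrap.
stty columns 512 rows 100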
How can I capture the input/output from a script in real time (such as with tee), but line by line instead of character by character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have finished processing (after the RETURN key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode that editors (and the like) use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked', which is the line-by-line behavior you think of as normal bash input, where you get to backspace and the command is not executed until you press Return. Ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side. For example, if you are running an editor on the far side, it can't wait for a return before sending the key-press. So, as some have suggested, grabbing shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually, most installations of bash take input in raw mode because they allow editor-like command editing. For example, Ctrl-P scrolls up the command history and Ctrl-A goes to the beginning of the line. Bash needs to get those keys the moment they are typed, not after waiting for a return.
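As an aside, if the end goal is a timestamped per-command log, recording on the remote side sidesteps the raw-mode problem entirely. A sketch (the log file name is made up) for the remote account's ~/.bashrc:

# Just before each new prompt, append a timestamped copy of the last
# history entry to a log file.
export PROMPT_COMMAND='echo "$(date "+%F %T") $(history 1 | sed "s/^ *[0-9]* *//")" >> "$HOME/.cmd_log"'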
This is another reason that capturing on the local side is obnoxiously difficult: if you capture on the local side, the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed, you have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text, backspaces, and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak has the contents you intended or not. Given that log, you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers the option to record everything you do in a shell session (input/output). The log it creates also contains undesirable characters, but I clean them with Perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
I hope this helps.
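For completeness, the recording itself is enabled when the session is started:

# Start a logged session; output goes to screenlog.0 by default.
# (Ctrl-a H toggles logging inside an already running session.)
screen -L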
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
The site where I found the Perl one-liner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n
I have a program that runs in the command line (i.e. $ run program starts up a prompt) and performs mathematical calculations. It has its own prompt that takes text input and responds through standard out/error (or creates a separate X window if needed, but this can be disabled). Sometimes I would like to send it small inputs, and other times I send in a large text file with one input per line. This program takes a lot of resources and has a long startup time, so it would be best to have only one instance of it running at a time. I could keep the program's prompt open and supply the input that way, or I can feed the process its input along with an exit command (to leave the prompt), which just prints the output. The problem with sending the request with an exit command is that the program must start up each time (slow...). Furthermore, the output of this program is sometimes cryptic, and it would be helpful to filter it in some way (e.g. simplify the output, apply ANSI colors, etc.).
This all makes me want to put some two-way I/O filter (or is that a "pipe"? or a "wrapper"?) around the program so that it can run in the background as a single process. I would then communicate with it without having to restart it, all while filtering the output to be more user friendly. I have been looking all over for ideas and am stumped at how to accomplish this in some simple shell-accessible manner.
Some things I have tried were redirecting stdin and stdout to files, but the program hangs (doesn't quit) and only reads the file once, leaving me unable to continue communicating. I think this is because the prompt is waiting for more user input after the EOF. I thought this could be set up as a local server, but I am uncertain how to begin accomplishing that.
I would love to find some simple way to accomplish this. Additionally, if you can think of a way to do it, is there also a way to allow attaching to or detaching from the prompt on request? Any help and ideas would be greatly appreciated.
You could create two named pipes (man mkfifo) and redirect input and output:
mkfifo fifoin fifoout
myprog < fifoin > fifoout
Then you could open new terminal windows and do this in one:
cat > fifoin
And this in the other:
cat < fifoout
(Or use tee to save the input/output as well.)
To dump a large input file into the program, use:
cat myfile > fifoin
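One caveat (assuming the program treats end-of-input as a signal to quit or stop reading, as the question describes): every time a writer such as cat exits, the program reading fifoin sees EOF. A sketch of a workaround in bash is to hold the write end open on a spare file descriptor for the whole session:

mkfifo fifoin fifoout
myprog < fifoin > fifoout &   # starts once both fifo ends are opened
exec 3> fifoin                # hold the write end open on fd 3
cat fifoout &                 # consume output so the program isn't blocked

echo "small input" >&3        # send input without closing the pipe
cat myfile >&3                # dump a whole input file the same way

exec 3>&-                     # close fd 3: the program finally sees EOF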