expect: the same script controlling kvm fails every few runs

I'm using expect to control kvm/qemu, and re-running the very same script produces different results: sometimes the code fails to work as expected, even though the output I see from within kvm (using kvm's -display curses) is exactly the same between runs. I've tried passing -d to expect, and it appears that (1) the buffer expect is working with never gets cleared of old content (e.g., text that preceded what the prior expect "…" was supposed to have matched), and (2) spaces don't really work as spaces.
Below is an example of "login: " showing up in the terminal but never matching expect "login: ". I suspect it has something to do with colour escape sequences, but I'm not sure of the best way to turn them off in kvm:
DragonFly/x86_64 (Amnesiac) (ttyv0)\u001b[38;74H\u001b[28X\u001b[39dlogin:\u001b[74G\u001b(B\u001b[m\u001b[39;49m\u001b[37m\u001b[40m\u001b[15;36r\u001b[36;1H\u001b[7S\u001b[1;54r\u001b[31;74H\u001b[37m\u001b[40mFri Feb 16 05:29:19 UTC 2018\u001b[32;74HWelcome to DragonFly!\u001b[34;74HTo start the installer, login as 'installer'. To just get a shell prompt,\u001b[35;74Hlogin as 'root'.\u001b[39;81H\u001b[30m\u001b[47m \u001b[39;74H\u001b(B\u001b[m\u001b[39;49m\u001b[37m\u001b[40m" (spawn_id exp6) match glob pattern "login: "? no

You can try expect login: instead of expect "login: ". Per the debug output, "login:" is immediately followed by an escape sequence rather than by a plain space, so the literal pattern with the trailing space never appears in the buffer.
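To see why the glob never fires, it helps to strip the CSI escape sequences out of a sample of the buffer. This is only an illustrative shell sketch (the sample string is abbreviated from the -d output above), not something expect itself needs:

```shell
# Abbreviated sample of the buffer from the -d output: "login:" is
# followed by an escape sequence, never by a plain space.
raw=$'DragonFly/x86_64 (Amnesiac) (ttyv0)\x1b[38;74H\x1b[28X\x1b[39dlogin:\x1b[74G'
# Strip CSI sequences (ESC [ parameters final-byte) so a literal
# match on the remaining text becomes possible.
clean=$(printf '%s' "$raw" | sed $'s/\x1b\\[[0-9;]*[A-Za-z]//g')
printf '%s\n' "$clean"
```

With the escapes removed, what remains is the plain text `DragonFly/x86_64 (Amnesiac) (ttyv0)login:` with no trailing space, which is why matching on `login:` alone succeeds.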

What is the best practice for a bash script running psql commands in the background?

I have been testing the connectivity and stability of a product. Part of my testing has been to open up 8 terminal windows and fire off a script that uses psql to query a remote database 10 times, grabbing 50k rows per query. I have aliased the command because I time it, log the results to another file, etc. I'll admit up front that I'm not sure whether this is good practice; I'm somewhat new to bash profiles and the rest.

It was getting pretty annoying to click through the 8 windows (all inside one larger window), so I thought I would try using "&" to fire it off 10 times in the background. This has proven problematic, and far less successful than manually telling 8 windows to run the script. Mainly, the window where I'm doing "& & & & etc." never "returns", and I have to press Ctrl-C to get back to a prompt. Additionally, I get a lot more server errors from psql. Here are the two commands I'm running, somewhat obfuscated and abbreviated:
test="(time bash ~/Documents/some/other/folders/myPsql.sh) >> \
~/Documents/some/stuff/logfile.txt 2>&1 && echo done"
shortcut="test & test & test & test & test & test & test & test & test & test"
Here is the psql script, which works fine when run via the "test" command above:
psql << EOF
\pset pager off
\pset timing on
\copy (select * from sometable limit 50000) to '~/Documents/some/folder/file.csv' csv;
\q
EOF
I am fairly new to a lot of the moving parts at work here, and so I recognize that something I am doing might be fundamentally flawed in some way.
Is there any "good"/better way to make my "shortcut" command above more successful?
EDIT: This was the error I was referring to:
psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
I'm using iTerm2 on Mac, and psql is "talking to" a local software client, which is interacting with other software on Predix (CloudFoundry) to query a Postgres db, also on Predix.
test is a standard shell command. Overriding it with your own definition is going to break rather a lot of scripts/functions/etc.
psqlTest() {
    # DANGER: This filename can be used for SQL injection attacks. Keep it under control.
    local outFile=${1:-~/Documents/some/folder/file.csv}
    psql <<EOF
\\pset pager off
\\pset timing on
\\copy (select * from sometable limit 50000) to '$outFile' csv;
\\q
EOF
}

parallelPsqlTest() {
    local count=${1:-10} # if not given a number, start 10 copies of our test
    for ((i=0; i<count; i++)); do
        psqlTest &
    done
}
Some notes:
Don't store commands in strings. See BashFAQ #50.
~ is meaningful to the shell, but not to most other programs. Thus, you want to have it expanded (replaced with /home/whatever or /Users/whatever) before the shell starts whichever software a path is being passed to.
Substituting variables into SQL text is a Very Bad Idea. See Bobby Tables. This is true even for filenames -- filenames on UNIX can contain quotes, can contain newlines, and can otherwise have a bunch of contents you probably assume they can't.
Don't use the name test for your own commands: It's the normal name for the command also known as [, specified at http://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html
The backslashes inside the heredoc need to be doubled to ensure that they are passed to psql and not interpreted by the shell itself.
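On the "never returns" symptom specifically: launching the copies from a function and following them with wait gives the shell a definite point to come back to. A minimal sketch, with sleep 0 standing in for the psql job (runParallel is an illustrative name, not from the post):

```shell
runParallel() {
    local count=${1:-10}          # default to 10 background copies
    for ((i=0; i<count; i++)); do
        sleep 0 &                 # stand-in for the real psql job
    done
    wait                          # block until every background job exits
}
runParallel 4 && echo "all done"
```

The wait builtin is what restores an orderly prompt: the function only returns once all of its background children have finished, instead of leaving stray jobs attached to the terminal.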

Passing readable punctuated text through ssh commands in macOS?

I need to "say" things to people from the shell and would like to set up an alias that properly and automatically escapes punctuation marks.
I currently have an alias that's working great, except for questions, commas, etc.:
tell_someone='ssh -e none username@hostname say'
But when I run tell_someone "can you hear me?", it returns:
zsh:1: no matches found: me?
While tell_someone "can you hear me\?" works fine. I'd like to make this work with and without quotes if possible, but obviously I need it to escape the punctuation.
Can a simple alias do the job or do I need to resort to writing a script that will handle this in a more robust manner?
NOTE: my rationale is that everyone I work with wears headphones and has multiple monitors, so it's nearly impossible to get their attention. I frequently have to resort to more covert means, like the pranks we used to play in the college computer lab on Sun systems, where we would play audio at people... :)
Try using this function instead of your alias (don't worry; it feels the same as an alias, it just handles arguments in a more flexible way):
tell_someone () { printf "%q" "$*" | ssh -x -e none someone@host say; }
Adjust it as per your needs.
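The piece doing the work is printf's %q directive, which re-quotes its argument so the string survives the second round of parsing by the remote shell. A quick bash illustration:

```shell
# %q backslash-escapes anything the shell would otherwise treat
# specially, so "?" no longer triggers glob matching downstream.
printf '%q\n' 'can you hear me?'
```

Every space and the trailing question mark come out backslash-escaped, which is exactly the form the remote shell needs to reassemble the original message for say.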

How do I send commands to the ADB shell directly from my app?

I want to send commands to the ADB shell itself, as if I had done the following in cmd:
>adb shell
shell#:/ <command>
I am using Python 3.4 on a Windows 7 64-bit machine. I can send one-line shell commands simply using subprocess.getoutput, such as:
subprocess.getoutput ('adb pull /storage/sdcard0/file.txt')
as long as the adb commands themselves are recognized by adb specifically, such as pull and push. However, other commands, such as grep, need to be run IN the shell, as above, since they are not recognized by adb. For example, the following line will not work:
subprocess.getoutput ('adb shell ls -l | grep ...')
To enter the commands in the shell, I thought I needed some kind of expect library, as that is what 'everyone' suggests; however, pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and after porting them to Python 3 and going through the .py files by hand, even those tweaked for Windows, nothing was working, each of them for different reasons.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way, using subprocess.Popen to enter commands in the adb shell:
import subprocess, time
from subprocess import PIPE

cmd1 = 'adb shell'
cmd2 = 'ls -l | grep ...'
p = subprocess.Popen(cmd1.split(), stdin=PIPE)
time.sleep(1)
p.stdin.write(cmd2.encode('utf-8'))
p.stdin.write('\n'.encode('utf-8'))
p.stdin.flush()
time.sleep(3)
p.kill()
Some things to remember:
Even though you import subprocess, you still need to invoke subprocess.Popen.
Sending cmd1 as a single string or as items in a list works too, but .split() does the trick and is easier on the eyes.
Since you only specified that you want to enter input to the shell, you only need stdin=PIPE. stdout would only be necessary if you wanted to receive output from the shell.
time.sleep(1) isn't really necessary; however, since many complained about input being faster or slower in Python 2 vs 3, consider using it. 'They' might have been using versions of 'expect' that need the shell's reply first. This code also worked when I tested it with time.sleep(0).
stdin.write will return an error if the input is not encoded properly; Python's default is Unicode. Entering a bytes literal like b'ls ...' did not work in my tests, but .encode() did. Don't forget the trailing newline!
If you use .encode(), there is a worry that the line might not get sent promptly, so to be sure it is good to include a flush().
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g., a recursive search through the entire device piped out to a txt file on the memory card), maybe give it some extra time before killing anything.
Remember to kill. If you don't kill it, the pipe may remain open, and even after exiting the test app on the console, the next command still went to the shell even though the prompt appeared to be my regular cmd prompt.
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse. You are doing all those unnecessary things because you do not understand how shell command parsing works (here I mean your PC's OS shell, not adb).
When all you needed was just this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
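The quoting principle the one-liner relies on generalizes: the pipeline has to reach the interpreting shell as a single argument, rather than being split by the caller. A stand-in demo with sh -c in place of adb shell (no device needed):

```shell
# The whole pipeline is one argument to sh -c, so the pipe runs
# inside that shell -- the same way 'adb shell "ls | grep ..."' hands
# the pipe to the shell on the device.
sh -c 'printf "a\nb\nab\n" | grep ab'   # prints "ab"
```

If the pipe were left unquoted, the calling shell would instead pipe sh's output into a local grep, which is exactly the mistake the adb version makes.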

Ignore Bash Trace in expect

I am trying to write regression tests, using expect, for an interactive bash script.
So far everything works OK. I spawn the process with the correct arguments, and then send/expect.
During tests, I would like to enable tracing in the bash script using the set -x command. However, when doing so, the bash trace output messes with expect.
I would like expect to ignore those lines when performing matching, but still output them on either stdout or stderr.
Apparently, there is no way to treat stderr and stdout independently.
I have already tried a few things using expect_before and expect_background, but none has given me good results.
Any thoughts?
Thanks.
If the output (I mean the non-trace output) from your bash script is well-defined, you can simply ignore the trace output in your expect patterns. For example, if your script shows:
+ echo 'Password:'
Password:
you can use regexp mode of expect:
expect -re '^Password:'
That would ignore the trace output but match the password prompt. Granted, your match rules should be very tight to not match any undesirable output.
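As a line-based analogy (grep here, since expect's -re anchors against its buffer rather than against lines): the xtrace copy carries bash's "+ " prefix, so an anchored pattern passes over it and hits only the real prompt.

```shell
# Two lines: the xtrace echo of the prompt, then the prompt itself.
# Only the second starts with "Password:", so the anchored pattern
# counts exactly one match.
printf "+ echo 'Password:'\nPassword:\n" | grep -c '^Password:'   # prints 1
```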

How can I flush the input buffer in an expect script?

I'm writing an Expect script and am having trouble dealing with the shell prompt (on Linux). My Expect script spawns rlogin and the remote system is using ksh. The prompt on the remote system contains the current directory followed by " > " (space greater-than space). A script snippet might be:
send "some command here\r"
expect " > "
This works for simple commands, but things start to go wrong when the command I'm sending exceeds the width of the terminal (or more precisely, what ksh thinks is the width of the terminal). In that case, ksh does some weird horizontal scrolling of the interactive command line, which seems to rewrite the prompt and stick an extra " > " in the output. Naturally this causes the Expect script to get confused and out of sync when there appears to be more than one prompt in the output after executing a command (my script contains several send/expect pairs).
I've tried changing PS1 on the remote system to something more distinctive like "prompt> " but a similar problem arises which indicates to me that's not the right way to solve this.
What I'm thinking might help is the ability for the script to tell Expect that "I know I'm properly synchronised with the remote system at this point, so flush the input buffer now." The expect statement has the -notransfer flag which doesn't discard the input buffer even if the pattern does match, so I think I need the opposite of that.
Are there any other useful techniques that I can use to make the remote shell behave more predictably? I understand that Expect goes through a lot of work to make sure that the spawned session appears to be interactive to the remote system, but I'd rather that some of the more annoying interactive features (such as the horizontal scrolling of ksh) be turned off.
If you want to throw away all output Expect has seen so far, try
expect -re $
This is a regexp match on $, which means the end of the input buffer, so it will simply skip everything received so far. More details in the Expect man page.
You could try "set -o multiline" or COLUMNS=1000000 (or some other suitably large value).
I have had difficulty with ksh and Expect in the past. My solution was to use something other than ksh as the login shell.
If you can change the remote login shell to something other than ksh (using the chsh command or by editing /etc/passwd), then you might try this with /bin/sh as the shell.
Another alternative is to tell ksh that the terminal is a dumb terminal, disallowing it from doing any special processing.
$ export TERM=""
might do the trick.
