On Windows, I tested a Tcl Expect script as follows:
package require Expect
spawn "cmd.exe"
expect ">"
send "echo hello world\r"
But the output printed "F:\Workspace\>", then it exited.
Of course, I expected it to execute "echo hello world".
Due to the way Expect for Windows works (it uses a special debugging mode), there are certain programs that can't be captured; telnet.exe is one, and cmd.exe could well be another. (The executables concerned have the system bit set in their file flags, IIRC.)
Fortunately, the programs this causes problems for are usually ones you don't actually need to automate with Expect. Tcl is quite capable of talking to other machines directly (by opening a socket), and cmd is often unneeded and, in the remaining cases, easy to automate with the exec command. If this was just a test standing in for your real automation, don't worry too much for now; try to automate the real program, but start with something simple (like exiting cleanly) and build up from there.
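For example, here is a minimal sketch (untested, and assuming the echo from your test really is all you need) of driving cmd.exe non-interactively with exec instead of spawning it:
# plain Tcl is enough here; Expect is not needed for exec
set output [exec cmd.exe /c echo hello world]
puts $output   ;# prints "hello world"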
It might be better if you tell me the problem you're really trying to solve. But anyway, you just need to type
echo hello world
instead of
send "echo hello world\r"
to get the result you require.
cheers
Brian
I am trying to write a script that opens a command-line application (sagemath in this case), sends a certain command down the pipe on startup (to attach a script), and does not close the application at the end.
I tried something like:
#!/bin/bash
echo "load(\"script.sage\")" | sage
This, of course, opens sage, loads the script, prints the output of the script, and closes sage. Adding & at the end of the last line didn't work.
I know that technically I can add this script to the list of scripts that are always loaded on startup, but this is not what I want. I thought it might be done by making a symbolic link in some directory to my script, but I am not sure whether there is such a directory and where it is.
Any suggestions?
Edit:
I didn't know about Expect (I'm a newcomer to Linux). After reading about it a bit, following Mark's suggestion, I managed to solve this. If this is of any interest to anyone in the future, this does the trick:
#!/usr/bin/expect
set timeout 20                     ;# wait up to 20 seconds for each expect
spawn sage                         ;# start sage under Expect's control
expect "sage:"                     ;# wait for the sage prompt
send "load(\"script.sage\")\n"     ;# run the startup command
interact                           ;# then hand the session over to the user
You could use 'screen' depending on how dynamically you need this script to run. See http://linux.die.net/man/1/screen for info on how to use screen.
You can either:
Use nohup to start the program, e.g., nohup echo "load(\"script.sage\")" | sage.
Or, you can use the disown command.
I want to send commands in the adb shell itself, as if I had done the following in cmd:
>adb shell
shell#:/ <command>
I am using Python 3.4 on a 64-bit Windows 7 machine. I can send one-line shell commands simply using subprocess.getoutput, such as:
subprocess.getoutput('adb pull /storage/sdcard0/file.txt')
as long as the adb commands themselves are recognized by adb specifically, such as pull and push. However, there are other commands, such as grep, that need to be run IN the shell, like above, since adb does not recognize them. For example, the following line will not work:
subprocess.getoutput('adb shell ls -l | grep ...')
To enter the commands in the shell, I thought I needed some kind of expect library, since that is what 'everyone' suggests, but pexpect, wexpect, and winexpect all failed to work. They were written for Python 2, and even after porting them to Python 3 and going through the .py files by hand (including the ones tweaked for Windows), nothing worked, each of them failing for a different reason.
How can I send the input I want to the adb shell directly?
If none of the already recommended shortcuts work for you, you can still go the 'regular' way and enter commands in the adb shell using subprocess.Popen:
import subprocess
import time

cmd1 = 'adb shell'
cmd2 = 'ls -l | grep ...'
p = subprocess.Popen(cmd1.split(), stdin=subprocess.PIPE)  # open the adb shell
time.sleep(1)                           # give the shell a moment to come up
p.stdin.write(cmd2.encode('utf-8'))     # send the command as bytes
p.stdin.write('\n'.encode('utf-8'))     # don't forget the newline
p.stdin.flush()                         # make sure it is actually sent
time.sleep(3)                           # give the command time to finish
p.kill()                                # close the shell and the pipe
Some things to remember:
Even though you import subprocess, you still need to spell out subprocess.Popen (and subprocess.PIPE).
Sending cmd1 as a single string, or as items in a list, should work too, but .split() does the trick and is easier on the eyes.
Since you only specified that you want to enter input to the shell, you only need stdin=PIPE; stdout would only be necessary if you wanted to receive output from the shell.
time.sleep(1) isn't really necessary; however, since many people have complained about input being faster or slower in Python 2 vs 3, consider using it. 'They' might have been using versions of 'expect' that need the shell's reply first. This code also worked when I tested it with time.sleep(0) instead.
stdin.write will raise an error if the input is not encoded properly; Python's default is Unicode strings. Passing a bytes literal like b'ls ...' did not work for me in my tests, but .encode() did. Don't forget the newline at the end!
If you use .encode(), there is a worry that the line might not get sent right away, so to be sure it is a good idea to include a flush().
time.sleep(3) is completely unnecessary, but if your command takes a long time to execute (e.g. a recursive search through the entire device piped out to a txt file on the memory card), give it some extra time before killing anything.
Remember to kill. If you don't kill the process, the pipe may remain open; even after exiting the test app on the console, the next command still went to the shell, although the prompt appeared to be my regular cmd prompt.
Amichai, I have to start by pointing out that your own "solution" is pretty awful, and your explanation makes it even worse: you are doing all those unnecessary things just because you do not understand how shell command parsing works (by shell I mean your PC's OS shell here, not adb).
When all you needed was just this one command:
subprocess.check_output(['adb', 'shell', 'ls /storage/sdcard0 | grep ...']).decode('utf-8')
I don't think that running a process in the foreground is in any way useful, so I'd like to run all processes in the background. Is that possible?
Also tell me if there is any problem associated with doing so.
You can adapt the code from this question: https://superuser.com/questions/175799/does-bash-have-a-hook-that-is-run-before-executing-a-command
Basically this uses the DEBUG trap to run a command before whatever you've typed on the command line. So, this:
preexec () { :; }

preexec_invoke_exec () {
    [ -n "$COMP_LINE" ] && return                       # do nothing if completing
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return   # don't cause a preexec for $PROMPT_COMMAND
    local this_command=$(HISTTIMEFORMAT= history 1)
    preexec "$this_command" &
}
trap 'preexec_invoke_exec' DEBUG
Runs the command, but with & afterwards, backgrounding the process.
Note that this will have other rather weird effects on your terminal, and anything supposed to run in the foreground (command line browsers, mail readers, interactive commands, anything requiring input, etc.) will have issues.
You can try this out by just typing bash, which will execute another shell. Paste the above code, and if things start getting weird, just exit out of the shell and things will reset.
Do you mean a bash script? Just add & at the end. Example:
$ ./myscript &
While it might be possible to do something clever like what #pgl suggested, it's not a good idea. Processes running in the background don't show you their output in a useful way. So, if all processes are automatically sent to the background, your terminal will be flooded with their various standard output and standard error messages, but you will have no way of knowing what came from where; your terminal will be next to useless and confusion will ensue.
So, yes, there is a very good reason to keep processes in the foreground: to see what they're doing and to be able to control them easily. To give an even more concrete example, any program that requires you to interact with it can't be run in the background. This includes things that ask Continue [Y/N]? or things like sudo that ask for your password. If you just blindly make everything run in the background, such commands will just silently hang.
I'm running exercise 14 of Learn Ruby the Hard Way. If I run the script in cmd it works fine, but I've been using Cygwin because it's nicer. When I run it in Cygwin using this command:
ruby ex14.rb Devon
I get the following output
test
one
two
Hi Devon, I'm the ex14.rb script.
I'd like to ask you a few questions.
Do you like me Devon?
> Where do you live Devon?
> What kind of computer do you have?
> Alright, so you said test about liking me.
You live in one. Not sure where that is.
And you have a two computer. Nice.
That is to say, the program starts and immediately runs the three STDIN.gets.chomp() commands, and once it gets through those it puts and prints everything at once.
Is there a way to fix this behaviour? I would obviously want to have the lines run in the order they are written. I was unsure what to google for this type of error - combinations of "cygwin", "ruby", "puts output delayed" and "gets out of order" returned nothing relevant. Those search terms seem too vague anyway.
What exactly is going on, and is there a solution?
I think it is all to do with the CR/LF differences between DOS and Unix.
try this...
set -o igncr
before running your script.
I'm writing an Expect script and am having trouble dealing with the shell prompt (on Linux). My Expect script spawns rlogin and the remote system is using ksh. The prompt on the remote system contains the current directory followed by " > " (space greater-than space). A script snippet might be:
send "some command here\r"
expect " > "
This works for simple commands, but things start to go wrong when the command I'm sending exceeds the width of the terminal (or more precisely, what ksh thinks is the width of the terminal). In that case, ksh does some weird horizontal scrolling of the interactive command line, which seems to rewrite the prompt and stick an extra " > " in the output. Naturally this causes the Expect script to get confused and out of sync when there appears to be more than one prompt in the output after executing a command (my script contains several send/expect pairs).
I've tried changing PS1 on the remote system to something more distinctive like "prompt> " but a similar problem arises which indicates to me that's not the right way to solve this.
What I'm thinking might help is the ability for the script to tell Expect that "I know I'm properly synchronised with the remote system at this point, so flush the input buffer now." The expect statement has the -notransfer flag which doesn't discard the input buffer even if the pattern does match, so I think I need the opposite of that.
Are there any other useful techniques that I can use to make the remote shell behave more predictably? I understand that Expect goes through a lot of work to make sure that the spawned session appears to be interactive to the remote system, but I'd rather that some of the more annoying interactive features (such as the horizontal scrolling of ksh) be turned off.
If you want to throw away all output Expect has seen so far, try
expect -re $
This is a regexp match on $ which means the end of the input buffer, so it will just skip everything received so far. More details at the Expect man page.
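For instance, a rough sketch of how that could be dropped into the snippet from the question (the command and the " > " prompt pattern are the question's own; the one-second sleep is a guess):
send "some command here\r"
sleep 1          ;# crude: give ksh time to finish any prompt redrawing
expect -re $     ;# throw away everything received so far
send "\r"        ;# ask for one fresh prompt
expect " > "     ;# now back in sync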
You could try "set -o multiline" or COLUMNS=1000000 (or some other suitably large value).
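If you go that route, here is a hedged sketch of setting those from within the Expect session (it assumes the remote ksh will honour a changed COLUMNS, and it reuses the " > " prompt pattern from the question):
expect " > "
send "export COLUMNS=1000000\r"   ;# make ksh think the terminal is very wide
expect " > "
send "set -o multiline\r"         ;# or try ksh's multiline option instead
expect " > "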
I have had difficulty with ksh and Expect in the past. My solution was to use something other than ksh for a login shell.
If you can change the remote login to other than ksh (using the chsh command or editing /etc/passwd) then you might try this with /bin/sh as the shell.
Another alternative is to tell KSH that the terminal is a dumb terminal - disallow it from doing any special processing.
$ export TERM=""
might do the trick.
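If you would rather do this from inside the Expect script than in the remote login files, a rough sketch (the host name is hypothetical, and the " > " prompt pattern is the one from the question):
spawn rlogin somehost            ;# hypothetical remote host
expect " > "
send "export TERM=\"\"\r"        ;# blank TERM so ksh stops the fancy line editing
expect " > "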