How do I pipe output from one python script as input to another python script? - bash

For example:
script1.py gets an infix expression from the user, converts it to a postfix expression, and returns it or prints it to stdout
script2.py gets a postfix expression from stdin, evaluates it, and outputs the value
I wanted to do something like this:
python3 script1.py | python3 script2.py
This doesn't work though, could you point me in the right direction as to how I could do this?
EDIT -
here are some more details as to what "doesn't work".
When I execute python3 script1.py | python3 script2.py
the terminal asks me for input for the script2.py program, when it should be asking for input for the script1.py program and redirecting that as script2.py's input.
So it asks me to "Enter a postfix expression: ", when it should be asking "Enter an infix expression: " and redirect that to the postfix script.

If I understand your issue correctly, your two scripts each write out a prompt for input. For instance, they could both be something like this:
in_string = input("Enter something")
print(some_function(in_string))
Where some_function is a function that has different output depending on the input string (which may be different in each script).
The issue is that the "Enter something" prompt doesn't get displayed to the user correctly when the output of one script is being piped to another script. That's because the prompt is written to standard output, so the first script's prompt is piped to the second script, while the second script's prompt is displayed. That's misleading, since it's the first script that will (directly) receive input from the user. The prompt text may also mess up the data being passed between the two scripts.
There's no perfect solution to this issue. One partial solution is to write the prompt to standard error rather than standard output. This would let you see both prompts (though you'd only actually be able to respond to one of them). You can't directly do that with input(), but print() can write to other file streams if you want: print("prompt", file=sys.stderr)
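For example, a minimal sketch of the stderr-prompt approach (the final print is a placeholder for the real infix-to-postfix conversion):
import sys

# Write the prompt to stderr so it reaches the terminal even when
# stdout is piped into another process.
print("Enter an infix expression: ", end="", file=sys.stderr, flush=True)
in_string = input()
print(in_string)  # placeholder for the real conversion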
Another partial solution is to check whether your input and output streams are connected to a terminal, and skip printing the prompts if either one is not a "tty". In Python, you can check with sys.stdin.isatty(). Many command-line programs have a different "interactive mode" if they're connected directly to the user rather than to a pipe or a file.
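A sketch of that check:
import sys

# Skip the prompt when either stream is a pipe or a file,
# not an interactive terminal.
if sys.stdin.isatty() and sys.stdout.isatty():
    print("Enter an infix expression: ", end="", flush=True)
in_string = input()
print(in_string)  # placeholder for the real conversion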
If piping the output around is a main feature of your program, you may not want to use prompts at all! Many standard Unix command-line programs (like cat and grep) don't have any interactive behavior. They require the user to pass command-line arguments or set environment variables to control how they run. That lets them work as expected even when they don't have access to standard input and standard output.
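For instance, a sketch that takes the expression as a command-line argument when given and falls back to stdin otherwise (the argument name is just an illustration):
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument("expression", nargs="?", help="infix expression")
args = parser.parse_args()

# Fall back to stdin so the script still composes with pipes.
expression = args.expression or sys.stdin.read().strip()
print(expression)  # placeholder for the real conversion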

For example, if you have nginx running, and script1.py is:
import os
os.system("ps aux")
and script2.py is:
import os
os.system("grep nginx")
Then running:
python script1.py | python script2.py
will be the same as
ps aux | grep nginx
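As an aside, the same script1.py written with subprocess, which is generally preferred over os.system (a sketch):
import subprocess

# The child inherits this script's stdout, so its output
# still flows into the pipe.
subprocess.run(["ps", "aux"], check=True)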

For completeness' sake, and to offer an alternative to using the os module:
The fileinput module takes care of piping for you, and from running a simple test I believe it makes for an easy implementation.
To enable your files to support piped input, simply do this:
import fileinput

with fileinput.input() as f_input:  # This gets the piped data for you
    for line in f_input:
        pass  # do stuff with each line of piped data
All you'd have to do then is:
$ cat some_textfile.txt | ./myscript.py
Note that fileinput also enables data input for your scripts like so:
$ ./myscript.py some_textfile.txt
$ ./myscript.py < some_textfile.txt
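Putting that together, a minimal sketch of myscript.py (here it just echoes each line back, standing in for real processing):
#!/usr/bin/env python3
import fileinput

# Handles piped stdin, filename arguments, and < redirection alike.
with fileinput.input() as f_input:
    for line in f_input:
        print(line, end="")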
This works with Python's print output just as easily:
# test.py - this prints the contents of some_textfile.txt
with open('some_textfile.txt', 'r') as f:
    for line in f:
        print(line, end='')  # end='' avoids doubling each newline
$ ./test.py | ./myscript.py
Of course, don't forget the shebang #!/usr/bin/env python at the top of your scripts for this to work.
The recipe is featured in Beazley & Jones's Python Cookbook - I wholeheartedly recommend it.

Related

Send Ctrl + d to a server via bash

I'm working on a script that requires you to press Ctrl+D when you complete your entries. I'd like to send this keystroke automatically so I can script my work rather than having to redo it each time.
You're probably talking about the "end of transmission" delimiter, which is used to indicate the end of user input. If that's the case, then you can always pipe data into your script. That is, instead of this:
$ test_script.sh
My input!
^D
You'd write that data to a file:
$ cat > input
My input!
^D
Then pipe that into the script:
$ test_script.sh < input
No ^D is required because once that file is fully read the script is signalled accordingly. The < shell operator switches STDIN to read from a file instead of the terminal. Likewise, > can be used to capture the output of a program and save it to a file, as done in the second step here, though you can use any tool you'd like to create or edit that input file.
This works with pretty much any scripting language, from Python, Perl, Ruby to Node.js as well as bash and other shells.
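For instance, a small Python script that reads until end of input behaves the same whether EOF comes from Ctrl+D at a terminal or from the end of a redirected file (read_all.py is a hypothetical name):
import sys

# sys.stdin.read() returns once EOF is reached.
data = sys.stdin.read()
print("Received %d bytes of input" % len(data))
Run it as python3 read_all.py < input and no ^D is needed.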

What is the difference between these two Bash commands?

What's the difference between these two Bash commands? :
bash <(curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered)
curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered | bash
The first command gave me this prompt:
Are you really sure you want to do this ? (y/N) ?
but the second did not.
In the first command, bash inherits its standard input from its parent. Assuming you typed the command at your prompt, the parent would be your interactive shell, whose standard input is (in the absence of any other change) your terminal emulator.
In the second command, bash's standard input is the output of curl, not a terminal, which means the standard input of the script executed by bash is also the output of curl.
Whatever command is asking for confirmation only does so if it detects that standard input is a terminal. Worse, if the script is trying to read from standard input, it may actually consume part of itself, if it wins the race condition with bash for reading from the pipe.
The correct thing to do (and the secure thing) is to save the output of curl to a file first, then verify what it is you are running before actually doing so.
curl -sL https://raw.githubusercontent.com/node-red/raspbian-deb-package/master/resources/update-nodejs-and-nodered > update-script
# look at update-script
bash update-script
By "look", I mean either visually inspect the output, or at least compare a locally computed checksum with a checksum provided by the source to ensure that the bytes you received are the bytes that you were supposed to get. (This guards agains network corruption, man-in-the-middle attacks, etc.)

Shebang pointing to script (also having shebang) is effectively ignored

Consider following code:
#!/usr/bin/env python
import sys
print "Hello! I've got %r as input." % sys.stdin.read()
This is a chmod +xed script at /usr/local/bin/my_interpreter. And this:
#!/usr/local/bin/my_interpreter
This is intended to be passed "as is" to python script.
is a chmod +xed script that tries to make use of it. If I echo something | /usr/local/bin/my_interpreter, it works fine, but once I try to execute the script above, it fails with
/Users/modchan/test_interpreter/foo.bar: line 3: This: command not found
Seems that foo.bar is silently redirected to bash instead of my script. What am I doing wrong? How to make this work?
It looks like Mac OS X requires the interpreter to be a binary, not another script. To make it work, change the second script's interpreter line to
#!/usr/bin/env /usr/local/bin/my_interpreter
But you've got a second problem here: the contents of the second script will not go to its interpreter's stdin; instead, the script's pathname is passed as a command-line argument, i.e.
/usr/bin/env /usr/local/bin/my_interpreter /Users/modchan/test_interpreter/foo.bar
You should read the file named by sys.argv[1] rather than reading from sys.stdin.
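A sketch of my_interpreter adjusted accordingly:
#!/usr/bin/env python
import sys

# As a shebang interpreter, we get the script's path as the first
# argument instead of its contents on stdin.
with open(sys.argv[1]) as f:
    contents = f.read()
print("Hello! I've got %r as input." % contents)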
This depends on the program loader of the operating system you're running, which I take to be OS X from your tags. Many UNIX-like operating systems require the shebang interpreter to be a compiled executable binary, not another script with another shebang.
http://en.wikipedia.org/wiki/Shebang_(Unix)
Linux has supported this since 2.6.27.9, but the author of this article suggests that there probably aren't any Berkeley-derived Unixen (which would probably include OS X) that do:
http://www.in-ulm.de/~mascheck/various/shebang/#interpreter-script
One way to accomplish what you want would be something like this:
#!/bin/sh
exec /usr/local/bin/my_interpreter <<EOM
... content to be executed ...
EOM
Another way would be something like this:
#!/usr/bin/env /usr/local/bin/my_interpreter
... content to be executed ...

Ruby not showing output of internal process

I'm trying this in Ruby.
I have a shell script to which I can pass a command, which will be executed by the shell after some initial environment variables have been set. So in Ruby code I'm doing this:
# ruby code
my_results = `some_script -allow username -cmd "perform_action"`
The issue is that since "some_script" runs "perform_action" in its own environment, I'm not seeing the result when I output the variable "my_results". A Ruby puts of "my_results" just gives me some initial comments from before the script processes the command "perform_action".
Any clues how I can get the output of perform_action into "my_results"?
Thanks.
The backticks will only capture stdout. If you are redirecting stdout, or writing to any other handle (like stderr), it will not show up in its output; otherwise, it should. Whether something goes into stdout or not is not dependent on an environment, only on redirection or direct writing to a different handle.
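If the missing output is going to stderr, a common fix is to fold it into stdout inside the backticks by appending 2>&1 to the command. The stdout/stderr distinction itself, illustrated with a runnable Python sketch:
import subprocess

# Capturing a command's output grabs stdout only, just like Ruby's
# backticks; stderr stays separate unless redirected explicitly.
result = subprocess.run(
    ["sh", "-c", "echo to-stdout; echo to-stderr >&2"],
    capture_output=True,
    text=True,
)
print("captured stdout:", result.stdout.strip())
print("separate stderr:", result.stderr.strip())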
Try to see whether your script actually prints to stdout from shell:
$ some_script -allow username -cmd "perform_action" > just_stdout.log
$ cat just_stdout.log
In any case, this is not a Ruby question. (Or at least it isn't if I understood you correctly.) You would get the same answer for any language.

Switch from file contents to STDIN in piped command? (Linux Shell)

I have a program (that I did not write) which is not designed to read in commands from a file. Entering commands on STDIN is pretty tedious, so I'd like to be able to automate it by writing the commands in a file for re-use. Trouble is, if the program hits EOF, it loops infinitely trying to read in the next command dropping an endless torrent of menu options on the screen.
What I'd like to be able to do is cat a file containing the commands into the program via a pipe, then use some sort of shell magic to have it switch from the file to STDIN when it hits the file's EOF.
Note: I've already considered using cat with the '-' for STDIN. Unfortunately (I didn't know this before), piped commands wait for the first program's output to terminate before starting the second program -- they do not run in parallel. If there's some way to get the programs to run in parallel with that kind of piping action, that would work!
Any thoughts? Thanks for any assistance!
EDIT:
I should note that my goal is not only to prevent the system from hitting the end of the commands file. I would like to be able to continue typing in commands from the keyboard when the file hits EOF.
I would do something like
(cat your_file_with_commands; cat) | sh your_script
That way, when the file with commands is done, the second cat will feed your script with whatever you type on stdin afterwards.
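To see the mechanism, here is a toy stand-in (hypothetical) for the interactive program:
# toy_repl.py - keeps reading commands from stdin until EOF or "quit".
import sys

for line in sys.stdin:
    command = line.strip()
    if command == "quit":
        break
    print("executing:", command)
Feeding it with (cat commands.txt; cat) | python3 toy_repl.py runs the file's commands first, then keeps reading whatever you type.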
Same as Idelic's answer, with simpler syntax ;)
cat your_file_with_commands - | sh your_script
I would think expect would work for this.
Have you tried using something like tail -f commandfile | command? I think that should pipe the lines of the file to command without closing the file descriptor afterwards. Use -n to specify the number of lines to be piped if tail -f doesn't catch all of them.
