Python on Windows 7: Odd behaviour opening a file for append

I am seeing odd behaviour when I open a file in append mode ('a+') under Windows 7 using Python.
I was wondering whether the behaviour is in fact incorrect, or whether I am misunderstanding how to use the following code:
log_file = open(log_file_path, "a+")
return_code = subprocess.call(["make", target], stdout=log_file, stderr=subprocess.STDOUT)
log_file.close()
The above code does not properly append to the file. In fact, on subsequent runs it won't even modify the file.
I tested it out using the Python Shell as well.
Once the file has been opened for the first time, making multiple subprocess calls will append properly to the file, however once the file has been closed and reopened it will never append again.
Anyone have any clues?
Thanks
To further simplify the problem, here is another set of steps that will fail:
log_file=open("temp.txt", "a+")
log_file.write("THIS IS A TEST")
log_file.close()
log_file=open("temp.txt", "a+")
subprocess.call(["echo", "test"], stdout=log_file, stderr=subprocess.STDOUT, shell=True)
log_file.close()
If you open the file temp.txt here is what I see:
testS A MUTHER F** TEST

It looks like your problem is in the use of shell=True. From the Python documentation for Popen:
On Unix, with shell=True: If args is a string, it specifies the command string to execute through the shell. This means that the string must be formatted exactly as it would be when typed at the shell prompt. This includes, for example, quoting or backslash escaping filenames with spaces in them. If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself.
So it looks like "echo" is the command, and "test" gets sent as an argument to the shell, instead of to "echo".
So changing your subprocess call to either:
subprocess.call("echo test", stdout=log_file, stderr=subprocess.STDOUT, shell=True)
or:
subprocess.call(["echo", "test"], stdout=log_file, stderr=subprocess.STDOUT)
fixes the problem, at least in my testing.

See http://mail.python.org/pipermail/python-list/2009-October/1221841.html
Briefly: opening a file in append mode leaves the file pointer in an implementation-dependent state. Seek to the end to get the same results on Windows as on Linux.
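Applying that to the snippet from the question, a minimal sketch of the fix (reusing the log_file_path and target names from the original code, with the seek as the only new step) would be:
import os
import subprocess
log_file = open(log_file_path, "a+")
log_file.seek(0, os.SEEK_END)  # move the file pointer to the end before handing the handle to the child
return_code = subprocess.call(["make", target], stdout=log_file, stderr=subprocess.STDOUT)
log_file.close()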

Related

Bash adding unknown extra characters to command in script

I am currently trying to create a script that executes a program 100 times with different parameters. It's typically pretty simple, but it's adding strange characters into the output filename that is passed into the command call for the program. The script I have written goes as follows:
#!/bin/bash
for i in {1..100}
do ./generaterandomizedlist 10 input/input_10_$i.txt
done
I've taken a small screenshot of the output file name here:
https://imgur.com/I855Hof
(The extra characters are not recognized by Chrome, so simply pasting the name doesn't work.)
It doesn't do this when I manually call the command issued in the script. Any ideas?
Your script has some stray CRs (carriage returns) in it, most likely from being edited on Windows. Use dos2unix or tr (for example, tr -d '\r' < script.sh > fixed.sh) to fix it.

subprocess missing output file

I am completely new to Python, but I am trying to learn.
I would like to use the subprocess module to run a simulation program that can be called in the terminal in a bash environment. The syntax is quite simple:
command inputfile.in
where the command is a larger simulation script in a Tcl/Tk environment.
OK, I have read a lot of the Python literature and have decided to use the Popen functionality of the subprocess module.
So from what I understand I should be able to format the command as follows:
p = subprocess.Popen(['command', 'inputfile.in'], stdout=subprocess.PIPE)
print(p.communicate())
The output of this command is two files. When I run the command in the terminal, I get two files in the same directory as the original input file:
File1.fid and File2.spe.
When I use Popen, there are two things that confuse me:
(1) I do not get any output files written to the directory of the input file. (2) The value returned by p.communicate() is present, indicating that the simulation was run.
What happened to the output files? Is there a specific way to call a command that produces files as a result?
I am running this in a Jupyter notebook cell inside of a for loop. This for loop serves to iteratively change the input file, thus systematically varying the conditions of the simulations. My operating system is Mac OS X.
The goal is to run the command with each iteration of the for loop, then store the output file data in a larger dictionary. Later I would like to compare the output file data to the experimental data iteratively in an optimization process that minimizes the residuals.
I would appreciate any help, and also any direction if Popen is not the correct Python function to do this.
Let's learn from something easy like this:
# This is equivalent to the command line `dir *.* /s /b` on Windows
import subprocess
sp = subprocess.Popen(['dir', '*.*', '/s', '/b'], stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
(std_out, std_err) = sp.communicate() # returns (stdout, stderr)
# print any error output
print('std_err: ', std_err) # expect ('std_err: ', '')
# print the captured output of the command
print('std_out: ', std_out) # expect ('std_out: ', ... can be a long list ...)
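For the simulation case in the question, a rough sketch along the same lines (assuming the program writes File1.fid and File2.spe into whatever directory it is run from; 'command' and 'inputfile.in' are the placeholder names from the question) could be:
import os
import subprocess
input_path = "inputfile.in"  # placeholder input file name from the question
work_dir = os.path.dirname(os.path.abspath(input_path))  # where the output files should appear
sp = subprocess.Popen(['command', os.path.basename(input_path)],
                      cwd=work_dir,  # run the simulation in the input file's directory
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(std_out, std_err) = sp.communicate()
print('std_err: ', std_err)  # any error messages from the simulation
print(os.listdir(work_dir))  # check whether File1.fid and File2.spe were written
If the files still do not show up, the contents of std_err usually explain why the simulation did not run as expected.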

How to paste multi-line input into Jupyter console?

The %paste magic for pasting multi-line input works with IPython 2, but fails with Jupyter console (on Mac OSX El Capitan).
~ > jupyter console
Jupyter Console 4.1.0
In [1]: %paste
ERROR: Line magic function `%paste` not found.
In [2]:
Going through the output of %lsmagic, which lists all the magic commands, indeed doesn't show %paste.
I tried to paste directly, but the indentation gets messed up, so something like %paste is apparently needed. Checking the official documentation (updated just 5 days ago), the word "paste" is not even mentioned.
So, how do you paste multi-line input to the console?
OK, found the solution. Jupyter console has a %cpaste magic that behaves a little differently from the previous %paste but gets the job done.
%cpaste:
Paste & execute a pre-formatted code block from clipboard.
You must terminate the block with '--' (two minus-signs) or Ctrl-D
alone on the line. You can also provide your own sentinel with '%paste
-s %%' ('%%' is the new sentinel for this operation).
The block is dedented prior to execution to enable execution of method
definitions. '>' and '+' characters at the beginning of a line are
ignored, to allow pasting directly from e-mails, diff files and
doctests (the '...' continuation prompt is also stripped). The
executed block is also assigned to variable named 'pasted_block' for
later editing with '%edit pasted_block'.
You can also pass a variable name as an argument, e.g. '%cpaste foo'.
This assigns the pasted block to variable 'foo' as string, without
dedenting or executing it (preceding >>> and + is still stripped)
'%cpaste -r' re-executes the block previously entered by cpaste.
'%cpaste -q' suppresses any additional output messages.
Do not be alarmed by garbled output on Windows (it's a readline bug).
Just press enter and type -- (and press enter again) and the block
will be what was just pasted.
IPython statements (magics, shell escapes) are not supported (yet).
See also
--------
paste: automatically pull code from clipboard.
Examples
--------
::
In [8]: %cpaste
Pasting code; enter '--' alone on the line to stop.
:>>> a = ["world!", "Hello"]
:>>> print " ".join(sorted(a))
:--
Hello world!

Why is gets throwing an error when arguments are passed to my Ruby script?

I'm using gets to pause my script's output until the user hits the enter key. If I don't pass any arguments to my script then it works fine. However, if I pass any arguments to my script then gets dies with the following error:
ruby main.rb -i
main.rb:74:in `gets': No such file or directory - -i (Errno::ENOENT)
from main.rb:74:in `gets'
...
The error message is showing the argument I passed to the script. Why would gets be looking at ARGV?
I'm using OptionParser to parse my command line arguments. If I use parse! instead of parse (so it removes things it parses from the argument list) then the application works fine.
So it looks like gets is reading from ARGV for some reason. Why? Is this expected? Is there a way to get it to not do that (using gets() didn't help)?
Ruby will automatically treat unparsed arguments as filenames, then open and read those files, making the input available to ARGF ($<). By default, gets reads from ARGF. To bypass that, use:
$stdin.gets
It has been suggested that you could use STDIN instead of $stdin, but it's usually better to use $stdin.
Additionally, after you capture the input you want from ARGV, you can use:
ARGV.clear
Then you'll be free to gets without it reading from files you may not have intended to read.
The whole point of Kernel#gets is to treat the arguments passed to the program as filenames and read those files. The very first sentence in the documentation reads:
Returns (and assigns to $_) the next line from the list of files in ARGV (or $*)
That's just how gets works. If you want to read from a specific IO object (say, $stdin), just call gets on that object.

C Shell: How to execute a program with non-command line arguments?

My $SHELL is tcsh. I want to run a C shell script that will call a program many times with some arguments changed each time. The program I need to call is in Fortran. I do not want to edit it. The program only takes arguments once it is executed, but not on the command line. Upon calling the program in the script, the program takes control (this is where I am stuck currently, I can never get out because the script will not execute anything until after the program process stops). At this point I need to pass it some variables, then after several iterations I will need to Ctrl+C out of the program and continue with the script.
How can this be done?
To add to what @Toybuilder said, you can use a "here document". I.e., your script could have:
./myfortranprogram << EOF
first line of input
second line of input
EOF
Everything between the "<<EOF" and the "EOF" will be fed to the program's standard input (does Fortran still use "read (5,*)" to read from standard input?)
And because I think @ephemient's comment deserves to be in the answer:
Some more tips: <<'EOF' prevents interpolation in the here-doc body; <<-EOF removes all leading tabs (so you can indent the here-doc to match its surroundings), and EOF can be replaced by any token. An empty token (<<"") indicates a here-doc that stops at the first empty line.
I'm not sure how portable those ones are, or if they're just tcsh extensions - I've only used the <<EOF type "here document" myself.
What you want to use is Expect.
Uhm, can you feed your Fortran code with a redirection? You can create a temporary file with your inputs, and then pipe it in with the stdin redirect (<).
This is a job for the Unix program expect, which can nicely and easily drive interactive programs and respond to their prompts.
I was sent here after being told my question was close to being a duplicate of this one.
FWIW, I had a similar problem with a csh (C shell) script.
This bit of code was allowing custom_command to execute without getting ANY input arguments:
foreach f ($forecastTimes)
    custom_command << EOF
    arg1=x$f;2
    arg2=ya
    arg3=z,z$f
    run
    exit
    EOF
end
It didn't work the first time I tried it, but after I backspaced out all of the white space in that section of the code, removed the space between the "<<" and the "EOF", and backspaced the closing "EOF" all the way to the left margin, it worked:
foreach f ($forecastTimes)
custom_command <<EOF
arg1=x$f;2
arg2=ya
arg3=z,z$f
run
exit
EOF
end
Not a tcsh user, but if the program runs and then reads in commands via stdin, you can use shell redirection (<) to feed it the required commands. If you run it in the background with &, you will not block while it executes. Then you can sleep for a bit, use whatever tools you have (ps, grep, awk, etc.) to discover the program's PID, and use kill to send it SIGINT, which is the same signal that pressing Ctrl-C sends.
