Windows process called by Perl doesn't write to file properly

I'm trying to call a Perl script from another Perl script, read from a serial port, and write to a file. I've distilled my code to isolate the problem, so the purpose of the code may not be obvious.
Caller:
use Win32::Process;
my $perl_path = $^X;
my $SerialLogProcess;
my $SerialLogObj;
my $serial_log_script = "callee.pl";
Win32::Process::Create($SerialLogObj, "$perl_path", "perl $serial_log_script", 0, NORMAL_PRIORITY_CLASS, ".");
$SerialLogProcess = $SerialLogObj->GetProcessID();
print "waiting for 3 secs";
sleep(3);
print "done";
`taskkill /F /T /PID $SerialLogProcess`;
Callee:
use Win32::SerialPort;
my $portObj = new Win32::SerialPort("\\\\.\\COM70") || die;
my $serialReading;
$portObj->baudrate(115200);
$portObj->parity("none");
$portObj->databits(8);
$portObj->stopbits(1);
system("rm -r \"log.txt\"");
open (LOGFILE, ">>log.txt") or die;
while (1) {
    $serialReading = $portObj->read(10);
    print LOGFILE $serialReading;
    #print LOGFILE " ";
}
So the caller creates a process for the callee script, and then kills it after 3 seconds. In those 3 seconds, I do something that gives guaranteed messages for the serial port to read. The log file is created, but nothing is written to it.
Here's what's weird: I can make the messages show up in two ways: run the callee script straight from the command line, or uncomment that last print. Unfortunately, neither is a solution for me. I'm stumped as to why the code doesn't work as written, and it makes me think there is some kind of undefined behavior.

The output is buffered by default; the data is only written out once enough of it has accumulated, and here the process is killed before that happens. Alternatively, use the following, which forces a flush on every print:
use IO::Handle qw( ); # Needed before Perl 5.14
open (LOGFILE, ">>log.txt") or die;
LOGFILE->autoflush(1);
But ugh, why use a global variable? Use a lexical filehandle instead:
open (my $LOGFILE, ">>log.txt") or die;
$LOGFILE->autoflush(1);
print $LOGFILE $serialReading;
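Put together with the rest of the callee from the question, the write loop would look roughly like this (a sketch only; $portObj is the Win32::SerialPort object configured above, and the point is simply that every chunk is flushed to disk before taskkill gets a chance to terminate the process):
use IO::Handle qw( ); # Needed before Perl 5.14

open(my $LOGFILE, ">>", "log.txt") or die "Can't open log.txt: $!";
$LOGFILE->autoflush(1);            # flush on every print

while (1) {
    my $serialReading = $portObj->read(10);   # $portObj set up as in the question
    print $LOGFILE $serialReading;
}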

Related

Does changing Perl 6's $*OUT change standard output for child processes?

I was playing around with shell and how it acts when I change the standard filehandles in the calling program. Proc says:
$in, $out and $err are the three standard streams of the to-be-launched program, and default to "-", which means they inherit the stream from the parent process.
As far as I can tell, the external program doesn't use the same file handles:
#!/Applications/Rakudo/bin/perl6
#`(
make an external Perl 6 program the outputs to standard handles
)
my $p6-name = 'in-out.p6'.IO;
#END try $p6-name.unlink; # why does this cause it to fail?
my $p6-fh = open $p6-name, :w;
die "Could not open $p6-name" unless ?$p6-fh;
$p6-fh.put: Q:to/END/;
#!/Applications/Rakudo/bin/perl6
$*ERR.say( qq/\t$*PROGRAM: This goes to standard error/ );
$*OUT.say( qq/\t$*PROGRAM: This goes to standard output/ );
END
$p6-fh.close;
say $p6-name.e ?? 'File is there' !! 'File is not there';
die "$p6-name does not exist" unless $p6-name.e;
{
#`(
Start with some messages to show that we can output to
the standard filehandles.
)
$*OUT.put: "1. standard output before doing anything weird";
$*ERR.put: "2. standard error before doing anything weird";
shell( "perl6 $p6-name" ).so;
}
{
#`(
This block assigns a new filehandle to $*OUT and prints a
message to it. I expect that message to not show up in the
terminal.
It then calls run-them to fire off the external process. It
should inherit the same standard out and its standard out
messages should not show up. But, they do.
)
temp $*OUT = open '/dev/null', :w;
$*OUT.put: "3. temp redefine standard output before this message";
shell( "perl6 $p6-name" ).so;
}
$*OUT.put: "4. everything should be back to normal";
The output shows that when I open /dev/null and assign its filehandle to $*OUT, the output from the current program doesn't show up in the terminal (there's no output starting with 3.). However, when I call shell, its standard output goes to the original standard output:
File is there
1. standard output before doing anything weird
2. standard error before doing anything weird
in-out.p6: This goes to standard error
in-out.p6: This goes to standard output
in-out.p6: This goes to standard error
in-out.p6: This goes to standard output
4. everything should be back to normal
I'm not worried about how to make this happen. I can create a Proc object and pass filehandles to it.
Is there something else going on?
By default the IO::Handle that is in $*OUT is bound to the low-level STDOUT filehandle given by the operating system.
shell and run just let the spawned process use the low-level STDOUT file that was given to Perl 6, unless you specify otherwise.
Perl 6 doesn't change anything about the outside environment until the moment before it spawns a new process.
The simplest thing to do is to give the filehandle object you want to use to the shell or run call with a named argument.
# no testing for failure because the default is to throw an error anyway
my $p6-name = 'in-out.p6'.IO;
END $p6-name.unlink;
$p6-name.spurt(Q'put "STDOUT: @*ARGS[0]";note "STDERR: @*ARGS[0]"');
run $*EXECUTABLE, $p6-name, 'run', :out(open '/dev/null', :w);
{
temp $*OUT = open '/dev/null', :w;
shell "'$*EXECUTABLE' '$p6-name' 'shell'", :err($*OUT);
}
This results in
STDERR: run
STDOUT: shell
In the particular case of throwing away the output data, :!out or :!err should be used instead.
run $*EXECUTABLE, $p6-name, 'no STDERR', :!err;
STDOUT: no STDERR
If you just want the data to be captured for you, :out and :err do just that:
my $fh = run( $*EXECUTABLE, $p6-name, 'capture', :out ).out;
print 'captured: ',$fh.slurp-rest;
captured: STDOUT capture

HWUT - selectively printing from read buffer into .exe file in OUT folder

I am receiving data from a serial port and use HWUT for comparing my test results. The content of the receive buffer cannot be used directly for comparing the GOOD and OUT results, because the OUT file will always contain unnecessary command prompts, stray line breaks and other clutter. I want to select what gets written from the read buffer into the OUT file. Here is an example:
←[36m
A> target cmd
←[36m
{t=3883.744541 s} Received data
A> result : 1
bytes read 518Closing serial port...OK
And I would like the OUT file to contain only 'result : 1'.
When I checked the code, messages.py seems to print to stdout, but I'm not sure whether that is what ends up in the OUT file. How can this be achieved?
Anything that you print to stdout should appear in the OUT/* files. If it does not, then this has nothing to do with reception via the serial line(s). Here is what I would do to analyze it:
In your connector application there must be something like
receive_n = receive(.., &buffer[0], Size);
buffer[receive_n] = '\0'; /* terminating zero */
printf("%s", &buffer[0]);
If this is so, then
Write in parallel into a log file.
static FILE* log_fh = fopen("tmp.log", "wb");
...
printf("%s", &buffer[0]);
fwrite((void*)buffer, 1, receive_n, log_fh);
Compare 'tmp.log' with the file in OUT.
If there is a difference, HWUT is to blame.
Check the output before you write it.
if( my_condition(buffer, receive_n) ) printf("%s", &buffer[0]);
HWUT has an internal infrastructure to post-process test output, but at the time of this writing it is not documented and therefore not reliable.
Edit the file "hwut-info.dat" in your TEST directory.
These R my Tests on Something Important (Title)
-------------------------------------------------------
--not *.exe
bash execute-this.sh
-------------------------------------------------------
The --not *.exe makes sure that HWUT will not execute the *.exe files which you compiled. The bash execute-this.sh line lets HWUT consider the file execute-this.sh as a test application and call it with 'bash'.
Inside the execute-this.sh you might want to make your application, execute it and filter the output, i.e.
#!/bin/bash
make my-test.exe
./my-test.exe | awk ' /^A>/ '
which will print only those lines that start with 'A>'. grep and awk are your friends here; you might want to familiarize yourself with both.
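If awk is not convenient on the machine that runs the test, the same filter can be written as a Perl one-liner (a sketch; it assumes, as in the sample output above, that the interesting lines are exactly the ones beginning with 'A>'):
./my-test.exe | perl -ne 'print if /^A>/'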
Alternatively, you may filter directly in your connector application.

Perl Windows - watch a file and read last line

I'm trying to watch a file in Perl on Windows, using Win32::ChangeNotify.
Here is my code:
use strict;
use warnings;
require Win32::ChangeNotify;
use Data::Dumper;
my $Path="C:\\Eamorr\\";
my $WatchSubTree=0;
my $Events="FILE_NAME";
my $notify=Win32::ChangeNotify->new($Path,$WatchSubTree,$Events);
while (1) {
    $notify->reset;
    $notify->wait;
    print "File changed\n";
}
But "File changed" never gets printed! I realise this is quite basic stuff, but I'm really struggling on this Windows platform.
I have a file in "C:\Eamorr\Eamorr.out" which I want to monitor for changes (a new line of data is appended to this file every ten minutes by another program).
When Eamorr.out is updated, I want to be able to run some Perl and populate a MySQL table.
Please help me watch the file Eamorr.out and print its last line to the console.
p.s. I'm on Windows Server 2003
Many thanks in advance,
This works on my Windows 7 with ActiveState Perl 5.16.
use feature ":5.16";
use warnings FATAL => qw(all);
use strict;
use Data::Dump qw(dump);
use Win32::ChangeNotify;
my $Path='C:\Phil\z\Perl\changeNotify\\';
my $WatchSubTree = 0;
my $Events = "SIZE";
say STDERR "Exists=", -e $Path;
my $notify=Win32::ChangeNotify->new($Path,$WatchSubTree,$Events) or say("Error=", Win32::GetLastError());
while (1) {
    $notify->reset;
    $notify->wait;
    say STDERR "File changed";
}
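The question also asks for the last line of Eamorr.out after each change. Something along these lines could do it (a sketch only, assuming the file stays small enough to reread on every notification; a module such as File::ReadBackwards would avoid rereading a large file):
use strict;
use warnings;
use Win32::ChangeNotify;

my $dir  = 'C:\\Eamorr';             # directory to watch
my $file = 'C:\\Eamorr\\Eamorr.out'; # file that gets a new line every ten minutes

my $notify = Win32::ChangeNotify->new($dir, 0, "SIZE")
    or die "Win32::ChangeNotify failed: " . Win32::GetLastError();

while (1) {
    $notify->reset;
    $notify->wait;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my $last;
    $last = $_ while <$fh>;          # remember only the final line
    close $fh;
    print "Last line: $last" if defined $last;
    # ...run the MySQL update here...
}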

Perl Net::SSH2 scp_put puts file and then hangs

I'm using Net::SSH2's scp_put method to place one file in my home directory on a Unix server from a Windows box. I am using Strawberry Perl 5.12 (portable version). I installed the libssh2 1.2.5 binaries and then Net::SSH2 from cpan.
Here's my code snippet:
sub uploadToHost {
    my $file           = $_[0];
    my $host           = $_[1];
    my $user           = $_[2];
    my $pass           = $_[3];
    my $remotelocation = $_[4];
    #makes a new SSH2 object
    my $ssh = Net::SSH2->new() or die "couldn't make SSH object\n";
    #prints proper error messages
    $ssh->debug(1);
    #nothing works unless I explicitly set blocking on
    $ssh->blocking(1);
    print "made SSH object\n";
    #connect to host; this always works
    $ssh->connect($host) or die "couldn't connect to host\n";
    print "connected to host\n";
    #authenticates with password
    $ssh->auth_password($user, $pass) or die "couldn't authenticate $user\n";
    print "authenticated $user\n";
    #this is the tricky bit that hangs
    $ssh->scp_put($file, $remotelocation) or die "couldn't put file in $remotelocation\n";
    print "uploaded $file successfully\n";
    $ssh->disconnect or die "couldn't disconnect\n";
} #ends sub
Output (edited for anonymity):
made SSH object
connected to host
authenticated
libssh2_scp_send_ex(ss->session, path, mode, size, mtime, atime) -> 0x377e61c
Net::SSH2::Channel::read(size = 1, ext = 0)
It then hangs forever (>40 minutes in one test) and needs to be killed.
What's strange is that it actually does scp the file to the remote server! It only hangs after it should have completed. I couldn't find references to this curious problem on Stack Overflow or elsewhere.
Can anyone point me in the right direction to either 1) stop it from hanging, or 2) implement (as a workaround) a timer that kills this one command after a few seconds, which is enough time to scp the file?
Thanks, everyone!
You can try using alarm() to prod your process into behaving. If you save this example as 'alarm.pl', you can see how it works:
use strict;
use warnings;
use 5.10.0;

# pretend to be a slow process if run as 'alarm.pl s'
if (@ARGV && $ARGV[0] eq 's') {
    sleep(30);
    exit();
}

# Otherwise set an alarm, then run myself with 's'
eval {
    local $SIG{ALRM} = sub { die "alarmed\n" };
    alarm(5);
    system("perl alarm.pl s");
};
if ($@) {
    die $@ unless $@ eq "alarmed\n";
    say "Timed out slow process";
}
else {
    say "Slow process finished";
}
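Applied to the code in the question, the same pattern might wrap the call that hangs (a sketch only; the 10-second limit is an assumption, and note that on Windows, alarm is serviced by the Perl interpreter and may not interrupt a call blocked deep inside libssh2, so test whether it actually fires for you):
eval {
    local $SIG{ALRM} = sub { die "scp_put timed out\n" };
    alarm(10);   # assume 10 seconds is always enough to copy the file
    $ssh->scp_put($file, $remotelocation) or die "couldn't put file in $remotelocation\n";
    alarm(0);    # cancel the alarm if scp_put returned in time
};
if ($@) {
    die $@ unless $@ eq "scp_put timed out\n";
    warn "scp_put did not return in time; continuing anyway\n";
}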
Use Net::SFTP::Foreign with the Net::SSH2 backend, Net::SFTP::Foreign::Backend::Net_SSH2:
use Net::SFTP::Foreign;
my $sftp = Net::SFTP::Foreign->new($host,
                                   user     => $user,
                                   password => $password,
                                   backend  => 'Net_SSH2');
$sftp->die_on_error("Unable to connect to remote host");
$sftp->put($file, $remotelocation);
$sftp->die_on_error("Unable to copy file");
If that doesn't work either, you can try using plink (from the PuTTY project) instead of the Net::SSH2 backend.
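For the plink route, Net::SFTP::Foreign can be pointed at an external SSH client via its ssh_cmd option (a sketch only; it assumes plink.exe is on the PATH, and the extra arguments passed through more, here plink's -pw switch, should be checked against the module's documentation for your setup):
use Net::SFTP::Foreign;

# assumed invocation: drive PuTTY's plink instead of the Net_SSH2 backend
my $sftp = Net::SFTP::Foreign->new($host,
                                   user    => $user,
                                   ssh_cmd => 'plink',
                                   more    => ['-pw', $pass]);
$sftp->die_on_error("Unable to connect to remote host");
$sftp->put($file, $remotelocation);
$sftp->die_on_error("Unable to copy file");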
I don't think it is hanging; it is just really slow, about 10x slower than it should be. The reason the file appears to be there is that it allocates the file before it has finished transferring. This isn't really too unexpected; Perl finds new ways to disappoint and frustrate programmers on a daily basis. Sometimes I think I spend more time working around Perl's idiosyncrasies and learning ten slightly different ways to do the same thing than doing real work.

Why does my Perl script halt if CGI module is used after reading from stdin on Windows?

I'm trying to implement a progress indicator for file uploads. Part 1 and Part 2 of the script run properly if executed separately, but when executed together the script halts at:
my $cg = new CGI();
The problem only occurs on a Windows server. What could be the reason?
#!C:\Perl\bin\perl.exe -w
use CGI;
$post_data_filename = "C:\\test\\postdata.txt";
$uploaded_filename = "C:\\test\\uploaded_file.txt";
#PART 1
# read and store the raw post data in a temporary file so that we can repeatedly
# look at size of this temporary file in order to implement a progress bar
open(TMP,">","$post_data_filename");
$len = $ENV{'CONTENT_LENGTH'};
read (STDIN ,$LINE, $len);
print TMP $LINE;
close (TMP);
#PART 2
#use a CGI instance to read the raw post data and extract the uploaded file from it
my $cg = new CGI();
open(STDIN,"$post_data_filename");
my $fh = $cg->upload('file[0]');
open($tmp_fh, ">$uploaded_filename");
while (<$fh>) {
    print $tmp_fh $_;
}
close($tmp_fh);
print "Content-type: text/html\n\n";
print "Ready\n";
exit;
Try doing binmode(STDIN) before reading from it. I'm guessing you are ending up with fewer bytes than the content length says, and that's causing CGI to mess up. You may also need to do the binmode after reopening STDIN.
Also, please check all your IO operations for success.
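Keeping the structure of the script above, the suggested binmode calls would go in roughly these places (a sketch, with error checks added since the original omits them):
# Part 1: read the raw POST data in binary mode
binmode(STDIN);                                   # before reading
my $len = $ENV{'CONTENT_LENGTH'};
open(my $tmp, '>', $post_data_filename) or die "open: $!";
binmode($tmp);
read(STDIN, my $body, $len) == $len or warn "short read from STDIN";
print {$tmp} $body;
close($tmp) or die "close: $!";

# Part 2: hand the saved data back to CGI
my $cg = CGI->new();
open(STDIN, '<', $post_data_filename) or die "reopen: $!";
binmode(STDIN);                                   # again, after reopening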
On Windows, a file can't be opened for reading while another process has it open for writing.
And your upload meter won't work because you read the whole STDIN and then write it to TMP, so you go from 0% straight to 100%.
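To make the meter meaningful, the POST body would have to be copied in pieces so the temporary file grows while the upload is still in progress. A rough sketch of that idea (the 8 KB chunk size is an arbitrary assumption):
binmode(STDIN);
open(my $tmp, '>', $post_data_filename) or die "open: $!";
binmode($tmp);
my $remaining = $ENV{'CONTENT_LENGTH'};
while ($remaining > 0) {
    my $want = $remaining < 8192 ? $remaining : 8192;   # copy in 8 KB chunks
    my $got  = read(STDIN, my $chunk, $want);
    last unless $got;                                   # stop on EOF or error
    print {$tmp} $chunk;
    $remaining -= $got;
}
close($tmp) or die "close: $!";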
