I want to be able to set a system environment variable in Perl, and I wrote the following script (inspired by this idea) to accomplish it:
use Win32;
use Win32::API;
use Win32::OLE qw( in );
my $machine = ".";
my $WMIServices = Win32::OLE->GetObject ( "winmgmts:{impersonationLevel=impersonate,(security)}//$machine/root/cimv2" ) || die;
my $objVariable = $WMIServices->Get("Win32_Environment")->SpawnInstance_;
$objVariable->{Name}="NewOSType";
$objVariable->{UserName} = "<System>";
$objVariable->{VariableValue} = "LAMBDA";
$objVariable->Put_;
However, I am not an expert in Perl and I would like to know what the experts think about this code. Is this the best way to accomplish this?
Another possible approach:
use strict;
use warnings;
use Win32::TieRegistry qw[:KEY_];
use Win32::API;
use constant HWND_BROADCAST => -1;
use constant WM_SETTINGCHANGE => 0x1a;
my $hklm_env = Win32::TieRegistry->new(
'HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Control/Session Manager/Environment',
{ Access => KEY_READ() | KEY_WRITE(), Delimiter => '/' }
); # returns undef if SYSTEM ENV not writable
my $hkcu_env = Win32::TieRegistry->new(
'HKEY_CURRENT_USER/Environment',
{ Access => KEY_READ() | KEY_WRITE(), Delimiter => '/' }
);
# if SYSTEM ENV not writable try USER ENV
my $e = defined($hklm_env) ? $hklm_env : $hkcu_env;
if(defined $e) {
$e->SetValue('Variable', 'Value');
}
else {
warn "Cannot open neither SYSTEM ENV nor USER ENV registry for Read+Write";
}
my $SendMessage = new Win32::API("user32", "SendMessage", 'NNNP', 'N') or die "Couldn't create SendMessage: $!\n";
my $RetVal = $SendMessage->Call(HWND_BROADCAST,WM_SETTINGCHANGE,0,'Environment');
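To verify the write, you could read the value back with the same Win32::TieRegistry handle; a minimal sketch, using the $e object and the placeholder 'Variable' name from the snippet above:

if (defined $e) {
    # GetValue returns the data stored under the given value name, or undef
    my $value = $e->GetValue('Variable');
    print defined $value ? "Variable = $value\n" : "Variable not found\n";
}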
Is there something like nodemon that monitors file changes and restarts a Perl script?
My perl script is just a while loop.
I tried to google for it, but results are either about mod_perl or irrelevant.
Any ideas?
I'm using Mac OS.
I don't know of a tool like nodemon for Perl, unless there's one for generically restarting any program when it changes. Here's one I knocked together.
#!/usr/bin/env perl
use strict;
use warnings;
use v5.10;
use File::Monitor;
use Child qw(child);
sub start_program {
my $program = shift;
return child {
exec $^X, $program, @ARGV;
};
}
sub restart_program {
my ($program, $child) = @_;
$child->kill("TERM");
return start_program($program);
}
sub monitor_program {
my $program = shift;
my $monitor = File::Monitor->new;
my $child = start_program($program);
say "PID: ".$child->pid;
$monitor->watch($program, sub {
$child = restart_program($program, $child);
});
while(1) {
sleep 1;
$monitor->scan;
}
}
monitor_program(shift);
This could be made more efficient by replacing File::Monitor with something hooking into the OS X filesystem event service.
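For example, here is a rough, untested sketch of the same monitor loop using Mac::FSEvents from CPAN instead of polling with File::Monitor. FSEvents reports activity per directory rather than per file, so this watches the directory the script lives in and restarts on any event under it; start_program and restart_program are the helpers defined above, and the sub name and latency value are just placeholders.

use File::Basename qw(dirname);
use Mac::FSEvents;

sub monitor_program_fsevents {
    my $program = shift;
    my $child   = start_program($program);
    say "PID: " . $child->pid;

    # FSEvents watches directories, so monitor the script's directory.
    my $fs = Mac::FSEvents->new( { path => dirname($program), latency => 1.0 } );
    $fs->watch;    # start the event stream

    # read_events blocks until the OS reports activity under the watched path.
    while ( my @events = $fs->read_events ) {
        $child = restart_program($program, $child);
    }
}

monitor_program_fsevents(shift);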
The following piece of code is supposed to read a few lines, undetermined in number, from a socket.
use warnings;
use strict;
use IO::Socket::INET;
my $server = shift;
my $port = shift;
my $sock = new IO::Socket::INET (
PeerAddr => $server,
PeerPort => $port,
Proto => 'tcp',
Timeout => 1,
Blocking => 0
)
or die "Could not connect";
while (my $in = <$sock>) {
print "$in";
}
print "Received last line\n";
Unfortunately, the $in = <$sock> part blocks even though I have set Blocking => 0 and the server does not send any more text. Hence, Received last line is never printed.
So, I tried to improve the behavior with use IO::Select:
use warnings;
use strict;
use IO::Socket::INET;
use IO::Select;
my $server = shift;
my $port = shift;
my $sock = new IO::Socket::INET (
PeerAddr => $server,
PeerPort => $port,
Proto => 'tcp',
Timeout => 1,
Blocking => 1
)
or die "Could not connect";
my $select = new IO::Select;
$select -> add($sock);
sleep 1;
while ($select -> can_read) {
my $in = <$sock>;
print $in;
}
This second approach only prints the first sent line, then seems to block forever.
Since I have seen such examples working, I believe the problem is Windows, on which I am trying to run these scripts.
Is there a way I can achieve a non-blocking read?
When using select (which can_read uses), it defeats the purpose to follow it with blocking I/O. You must also avoid buffered I/O, because the system (i.e. select) doesn't know about any data sitting in your library's buffers. This means you can't mix select with read, readline (aka <> and <$fh>) or eof. You must use sysread.
my %clients;
for my $fh ($select->can_read()) {
my $client = $clients{$fh} //= {};
our $buf; local *buf = \( $client->{buf} //= '' ); # alias: $buf is an alias for $client->{buf}
my $rv = sysread($fh, $buf, 64*1024, length($buf));
if (!defined($rv)) {
my $error = $!;
$select->remove($fh);
delete($clients{$fh});
# ... Handle error ...
next;
}
if (!$rv) {
$select->remove($fh);
delete($clients{$fh});
# ... Handle EOF ... # Don't forget to check if there's anything in $buf.
next;
}
# ... Remove any complete messages from $buf and handle them ...
}
If you want to read a line at a time, you'd use
while ($buf =~ s/^([^\n]*)\n//) {
process_msg($client, $1);
}
I am using the DBD::Oracle module in Perl to insert XML content into an Oracle 11g instance. While inserting some of the documents in my sample set, the script fails because the module returns "Unsupported named object type for bind parameter". I would like to handle this error and let the loop continue to the next iteration.
Following is my code:
use strict;
use warnings;
use DBI;
use DBD::Oracle qw(:ora_session_modes);
use DBD::Oracle qw(:ora_types);
die("USAGE: $0 <input_directory>") unless ($#ARGV == 0);
my $directory=$ARGV[0];
my $dbh = DBI->connect('dbi:Oracle:dbname', "username", "pass");
my $SQL;
opendir(IMD, $directory) || die ("Cannot open directory");
my @listOfFiles = readdir(IMD);
closedir(IMD);
my $xmltype_string;
my $xml;
my $i = 1;
foreach my $file (@listOfFiles)
{
unless($file eq '.' or $file eq '..')
{
print "inserting File no. $i \t $file .... \n";
{
local $/=undef;
open (FILE , "<" , "$directory/$file" );
$xml=<FILE>;
close (FILE);
}
$SQL="insert into sampleTable values ( :ind, :xml)";
my $sth =$dbh-> prepare($SQL);
$sth->bind_param(":xml" , $xml , { ora_type => ORA_XMLTYPE});
$sth->bind_param(":ind" , $i);
$sth-> execute();
$i++;
}
}
I am getting the error in bind_param.
Error handling is usually done via the Try::Tiny module:
use Try::Tiny;
try {
something_that_could_die();
}
catch {
handle_error($_);
}
finally {
do_something_either_way();
}; # ← trailing semicolon not optional.
Both catch and finally are optional.
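Applied to the DBD::Oracle loop from the question, the insert could be wrapped roughly like this. This is only a sketch (not tested against Oracle), and it assumes the handle was created with RaiseError => 1 so that DBI failures become exceptions that catch can see:

use Try::Tiny;

# Assumes $dbh was created with { RaiseError => 1 } so DBI errors throw.
my $sth = $dbh->prepare('insert into sampleTable values ( :ind, :xml)');

try {
    $sth->bind_param(':xml', $xml, { ora_type => ORA_XMLTYPE });
    $sth->bind_param(':ind', $i);
    $sth->execute();
}
catch {
    # Log the failure and let the surrounding foreach move on to the next file.
    warn "Insert of file no. $i ($file) failed: $_";
};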
I found this example (posted by @ikegami) of a way to use IPC::Open3 on Windows using sockets. The problem is that, when I run it, I get an error, "An existing connection was forcibly closed by the remote host", on the sysread. The command runs, the select works correctly, but the sysread is getting undef instead of the expected 0 for end of file. This behavior is not the same for all commands. If I change the command to echo Hello World! it does not cause the error. Any idea what is going on here?
Here is the code from the example:
use strict;
use warnings;
use IO::Select qw( );
use IPC::Open3 qw( open3 );
use Socket qw( AF_UNIX SOCK_STREAM PF_UNSPEC );
print( "REMOVE ME: getting started\n" );
sub _pipe {
socketpair($_[0], $_[1], AF_UNIX, SOCK_STREAM, PF_UNSPEC)
or return undef;
shutdown($_[0], 1); # No more writing for reader
shutdown($_[1], 0); # No more reading for writer
return 1;
}
sub _open3 {
local (*TO_CHLD_R, *TO_CHLD_W);
local (*FR_CHLD_R, *FR_CHLD_W);
local (*FR_CHLD_ERR_R, *FR_CHLD_ERR_W);
if ($^O =~ /Win32/) {
_pipe(*TO_CHLD_R, *TO_CHLD_W ) or die $^E;
_pipe(*FR_CHLD_R, *FR_CHLD_W ) or die $^E;
_pipe(*FR_CHLD_ERR_R, *FR_CHLD_ERR_W) or die $^E;
} else {
pipe(*TO_CHLD_R, *TO_CHLD_W ) or die $!;
pipe(*FR_CHLD_R, *FR_CHLD_W ) or die $!;
pipe(*FR_CHLD_ERR_R, *FR_CHLD_ERR_W) or die $!;
}
my $pid = open3('>&TO_CHLD_R', '<&FR_CHLD_W', '<&FR_CHLD_ERR_W', @_);
return ( $pid, *TO_CHLD_W, *FR_CHLD_R, *FR_CHLD_ERR_R );
}
# When I change the command to 'echo Hello World' it works...
my ($pid, $to_chld, $fr_chld, $fr_chld_err) =
_open3('cmd /c "dir /s/b"');
my %objs;
my $in_sel = IO::Select->new();
my $out_sel = IO::Select->new();
for my $fh ($fr_chld, $fr_chld_err) {
my $obj = {
buf => '',
};
$objs{ fileno($fh) } = $obj;
$in_sel->add($fh);
}
close($to_chld);
while ($in_sel->count() + $out_sel->count()) {
my ($ins, $outs) = IO::Select::select($in_sel, $out_sel, undef);
for my $fh (@$ins) {
my $obj = $objs{ fileno($fh) };
our $buf; local *buf = \( $obj->{buf} );
my $bytes_read = sysread($fh, $buf, 64*1024, length($buf));
if (!$bytes_read) {
warn("Error reading from child: $!\n")
if !defined($bytes_read);
$in_sel->remove($fh);
}
}
for my $fh (@$outs) {
}
}
waitpid($pid, 0);
print("STDOUT:\n$objs{ fileno( $fr_chld ) }{buf}");
print("\n" );
print("STDERR:\n$objs{ fileno( $fr_chld_err ) }{buf}");
I think it's because something like shutdown was used instead of something like close. Sounds safe to ignore.
grep $!{$_}, keys %! shows ECONNRESET, so just change
warn("Error reading from child: $!\n")
if !defined($bytes_read);
to
warn("Error reading from child: $!\n")
if !defined($bytes_read) && !$!{ECONNRESET};
Does additional code in the :content_cb callback slow down the download?
Suppose the additional code takes 1_000/1_000_000 seconds to run and the callback gets called 1_000 times; would that slow down the download by 1_000/1_000_000 * 1_000 seconds?
#!/usr/bin/env perl
use warnings;
use 5.012;
use Time::HiRes qw(usleep);
use File::Basename;
use LWP::UserAgent;
my $url = 'my_url';
my $file_name = basename $url;
my $ua = LWP::UserAgent->new();
open my $fh, '>>:raw', $file_name or die $!;
my $res = $ua->get(
$url,
':content_cb' => sub {
my ( $chunk, $res, $proto ) = @_;
print $fh $chunk;
usleep( 1000 ); # code substitute
},
);
close $fh;
I would recommend using the HTTP::Async module.
use HTTP::Async;
use HTTP::Request;
my $url = 'http://...';
my $async = HTTP::Async->new;
$async->add( HTTP::Request->new( GET => $url ) );
while ( my $response = $async->wait_for_next_response ) {
# Do some processing with $response
}
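If the goal is still to save the body to a file, as in the question's script, the processing inside that loop could look roughly like this. A sketch only; note that unlike :content_cb, HTTP::Async hands you the complete response, so the body is buffered in memory rather than streamed chunk by chunk ($file_name is taken from the question's script):

while ( my $response = $async->wait_for_next_response ) {
    die $response->status_line unless $response->is_success;

    open my $fh, '>:raw', $file_name or die $!;
    print {$fh} $response->decoded_content;   # decoded_content undoes any Content-Encoding
    close $fh;
}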