I know this is a duplicate, but my question was not answered in any other thread. The output of sudo cpanm WWW::Mechanize is too long to put in the thread. pastebin: 3BYUtSss
I tried executing a perl script, and I get this error:
Can't locate WWW/Mechanize.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.16.3/darwin-thread-multi-2level /opt/local/lib/perl5/site_perl/5.16.3 /opt/local/lib/perl5/vendor_perl/5.16.3/darwin-thread-multi-2level /opt/local/lib/perl5/vendor_perl/5.16.3 /opt/local/lib/perl5/5.16.3/darwin-thread-multi-2level /opt/local/lib/perl5/5.16.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at io.pl line 5.
In case you need it, here is my perl script's contents:
#!/usr/bin/env perl
use warnings;
use strict;
use WWW::Mechanize;
my $mech = WWW::Mechanize->new();
my ($get,$host,$title);
while (<>) {
if (m|^GET (\S+) |) {
$get = $1;
} elsif ( m|^Host: (\S+)\.| ) {
$host = $1;
} else {
# Unrecognized line...reset
$get = $host = $title = '';
}
if ($get and $host) {
my ($title) = $get =~ m|^.*\/(.+?)$|; # default title
my $url = 'http://' . $host . $get;
$mech->get($url);
if ($mech->success) {
# HTML may have title, images will not
$title = $mech->title() || $title;
}
print "Title: $title\n";
print "URL: $url\n";
print "\n";
$get = $host = $title = '';
}
}
These look to be the key lines in the output from cpanm, down at the bottom.
! Installing the dependencies failed: Installed version (3.59) of CGI is not in range '4.08'
! Bailing out the installation for WWW-Mechanize-1.75.
Looks like you need to install a higher version of the CGI distribution.
The key lines in the cpanm output are:
Building and testing CGI-4.21 ... FAIL
! Installing CGI failed. See /Users/skylerspaeth/.cpanm/work/1440436409.90704/build.log for details. Retry with --force to force install it.
So look in /Users/skylerspaeth/.cpanm/work/1440436409.90704/build.log and see what the problem is. If that log is no longer there, you may need to run cpanm again, which will generate another build.log.
You find the key lines in cpanm output by searching for "fail". Usually, it'll point you at a build.log file for further details.
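In short, the usual way out is to bring CGI up to the required version first and then retry the Mechanize install. A hedged sketch of the commands (which flags you actually need depends on why CGI-4.21's tests failed, which only that build.log can tell you):
perl -MCGI -le 'print $CGI::VERSION'   # check which CGI version you currently have
sudo cpanm CGI                         # try to install a CGI that satisfies the 4.08 requirement
sudo cpanm --force CGI                 # only after reading build.log, if the test failures are acceptable
sudo cpanm WWW::Mechanize              # retry once CGI is new enough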
I've written this script (called SpeedTest.pl) to log internet speed, in order to help resolve a problem with my ISP.
It works well, but only when I run it with the Perl interpreter (i.e. when I double-click the script). I want to compile it into a stand-alone executable so I can run it on a different PC without Perl installed.
Well, I've tried both pp and Perl2Exe, but when I launch SpeedTest.exe I see a lot of processes called "SpeedTest.exe" in Task Manager. If I don't kill all these processes, the OS crashes (a pop-up says: "the memory can't be written", blah blah blah).
Any ideas?
This is the script:
#!/usr/local/bin/perl
use strict;
use warnings;
use App::SpeedTest;
my($day, $month_temp, $year_temp)=(localtime)[3,4,5];
my $year = $year_temp+1900;
my $month = $month_temp+1;
my $date = "0"."$day"."-"."0"."$month"."-"."$year";
my $filename = "Speed Test - "."$date".".csv";
if (-e $filename) {
goto SPEEDTEST;
} else {
goto CREATEFILE;
}
CREATEFILE:
open(FILE, '>', $filename);
print FILE "Date".";"."Time".";"."Download [Mbit/s]".";"."Upload [Mbit/s]".";"."\n";
close FILE;
goto SPEEDTEST;
SPEEDTEST:
my $download = qx(speedtest -Q -C --no-upload);
my $upload = qx(speedtest -Q -C --no-download);
my @download_chars = split("", $download);
my @upload_chars = split("", $upload);
my $time = "$download_chars[12]"."$download_chars[13]"."$download_chars[14]"."$download_chars[15]"."$download_chars[16]";
my $download_speed = "$download_chars[49]"."$download_chars[50]"."$download_chars[51]"."$download_chars[52]"."$download_chars[53]";
my $upload_speed = "$upload_chars[49]"."$upload_chars[50]"."$upload_chars[51]"."$upload_chars[52]"."$upload_chars[53]";
my $output = "$date".";"."$time".";"."$download_speed".";"."$upload_speed".";";
open(FILE, '>>', $filename);
print FILE $output."\n";
close FILE;
sleep 300;
my($day_check, $month_temp_check, $year_temp_check)=(localtime)[3,4,5];
my $year_check = $year_temp_check+1900;
my $month_check = $month_temp_check+1;
my $date_check = "0"."$day_check"."-"."0"."$month_check"."-"."$year_check";
my $filename_check = "Speed Test - "."$date_check".".csv";
if ($filename = $filename_check) {
goto SPEEDTEST;
} else {
$filename = $filename_check;
goto CREATEFILE;
}
Well, Steffen really answered this by way of a Comment, but here it is as an Answer. Just compile your Perl into an EXE that does NOT have the same name as the one that the Perl script is calling, for example:
speedtest.pl compiled into myspeedtest.exe, which calls speedtest.exe
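For example, with pp the output name is whatever you pass to -o, so something like the following avoids the recursion (myspeedtest.exe is just an arbitrary name that differs from the speedtest command the script runs):
pp -o myspeedtest.exe SpeedTest.pl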
See the discussion at Is `command -v` option required in a POSIX shell? Is posh compliant with POSIX?. It explains that type, as well as the command -v option, is optional in POSIX.1-2004.
The answer marked correct at Check if a program exists from a Bash script doesn't help either. Just like type, hash is also marked as XSI in POSIX.1-2004. See http://pubs.opengroup.org/onlinepubs/009695399/utilities/hash.html.
Then what would be a POSIX compliant way to write a shell script to find if a command exists on the system or not?
How do you want to go about it? You can look for the command in the directories on the current value of $PATH; you could also look in the directories specified by default for the system PATH (getconf PATH, as long as getconf exists on PATH).
Which implementation language are you going to use? (For example: I have a Perl implementation that does a decent job finding executables on $PATH, but Perl is not part of POSIX; is it remotely relevant to you?)
Why not simply try running it? If you're going to deal with Busybox-based systems, lots of the executables can't be found by searching; they're built into the shell. The major caveat is if a command does something dangerous when run with no arguments, but very few POSIX commands, if any, do that. You might also need to determine what command exit statuses indicate that the command is not found versus the command objecting to not being called with appropriate arguments. And there's little guarantee that all systems will be consistent on that. It's a fraught process, in case you hadn't gathered.
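If you do want to stay in plain POSIX sh and take the first suggestion (scan the directories in $PATH yourself), a minimal sketch could look like this; the script name and exact behaviour are my assumptions, not something from the thread:
#!/bin/sh
# is_on_path: print the location of $1 and exit 0 if it is an executable
# regular file in some directory on $PATH; exit 1 otherwise.
# An empty PATH entry is treated as "." (the current directory).
cmd=${1:?usage: is_on_path command}
IFS=:
for dir in $PATH; do
    [ -n "$dir" ] || dir=.
    if [ -f "$dir/$cmd" ] && [ -x "$dir/$cmd" ]; then
        printf '%s\n' "$dir/$cmd"
        exit 0
    fi
done
exit 1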
Perl implementation: pathfile
#!/usr/bin/env perl
#
# @(#)$Id: pathfile.pl,v 3.4 2015/10/16 19:39:23 jleffler Exp $
#
# Which command is executed
# Loosely based on 'which' from Kernighan & Pike "The UNIX Programming Environment"
#use v5.10.0; # Uses // defined-or operator; not in Perl 5.8.x
use strict;
use warnings;
use Getopt::Std;
use Cwd 'realpath';
use File::Basename;
my $arg0 = basename($0, '.pl');
my $usestr = "Usage: $arg0 [-AafhqrsVwx] [-p path] command ...\n";
my $hlpstr = <<EOS;
-A Absolute pathname (determined by realpath)
-a Print all possible matches
-f Print names of files (as opposed to symlinks, directories, etc)
-h Print this help message and exit
-q Quiet mode (don't print messages about files not found)
-r Print names of files that are readable
-s Print names of files that are not empty
-V Print version information and exit
-w Print names of files that are writable
-x Print names of files that are executable
-p path Use PATH
EOS
sub usage
{
print STDERR $usestr;
exit 1;
}
sub help
{
print $usestr;
print $hlpstr;
exit 0;
}
sub version
{
my $version = 'PATHFILE Version $Revision: 3.4 $ ($Date: 2015/10/16 19:39:23 $)';
# Beware of RCS hacking at RCS keywords!
# Convert date field to ISO 8601 (ISO 9075) notation
$version =~ s%\$(Date:) (\d\d\d\d)/(\d\d)/(\d\d) (\d\d:\d\d:\d\d) \$%\$$1 $2-$3-$4 $5 \$%go;
# Remove keywords
$version =~ s/\$([A-Z][a-z]+|RCSfile): ([^\$]+) \$/$2/go;
print "$version\n";
exit 0;
}
my %opts;
usage unless getopts('AafhqrsVwxp:', \%opts);
version if ($opts{V});
help if ($opts{h});
usage unless scalar(@ARGV);
# Establish test and generate test subroutine.
my $chk = 0;
my $test = "-x";
my $optlist = "";
foreach my $opt ('f', 'r', 's', 'w', 'x')
{
if ($opts{$opt})
{
$chk++;
$test = "-$opt";
$optlist .= " -$opt";
}
}
if ($chk > 1)
{
$optlist =~ s/^ //;
$optlist =~ s/ /, /g;
print STDERR "$arg0: mutually exclusive arguments ($optlist) given\n";
usage;
}
my $chk_ref = eval "sub { my(\$cmd) = \@_; return -f \$cmd && $test \$cmd; }";
my @PATHDIRS;
my %pathdirs;
my $path = defined($opts{p}) ? $opts{p} : $ENV{PATH};
#foreach my $element (split /:/, $opts{p} // $ENV{PATH})
foreach my $element (split /:/, $path)
{
$element = "." if $element eq "";
push @PATHDIRS, $element if $pathdirs{$element}++ == 0;
}
my $estat = 0;
CMD:
foreach my $cmd (@ARGV)
{
if ($cmd =~ m%/%)
{
if (&$chk_ref($cmd))
{
print "$cmd\n" unless $opts{q};
next CMD;
}
print STDERR "$arg0: $cmd: not found\n" unless $opts{q};
$estat = 1;
}
else
{
my $found = 0;
foreach my $directory (@PATHDIRS)
{
my $file = "$directory/$cmd";
if (&$chk_ref($file))
{
$file = realpath($file) if $opts{A};
print "$file\n" unless $opts{q};
next CMD unless defined($opts{a});
$found = 1;
}
}
print STDERR "$arg0: $cmd: not found\n" unless $found || $opts{q};
$estat = 1;
}
}
exit $estat;
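A couple of example invocations, assuming you save the script as pathfile somewhere on your own PATH and make it executable:
pathfile make        # first executable 'make' found on $PATH
pathfile -a perl     # every 'perl' on $PATH, not just the first
pathfile -A -x ls    # realpath of the first executable 'ls' found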
How do I make a Bash script that will collect all the links from a website (without downloading it)? The goal is only to get all the links and then save them in a txt file.
I've tried this code:
wget --spider --force-html -r -l1 http://somesite.com | grep 'Saving to:'
Example: there are download links within a website (for example, dlink.com), so I just want to capture all the links that contain dlink.com and save them into a txt file.
I've searched around using Google, and I found none of it useful.
Using a proper parser in Perl:
#!/usr/bin/env perl
use strict;
use warnings;
use LWP::UserAgent;
use HTML::LinkExtor;
use URI::URL;
my $ua = LWP::UserAgent->new;
my ($url, $f, $p, $res);
if(@ARGV) {
$url = $ARGV[0]; }
else {
print "Enter an URL : ";
$url = <>;
chomp($url);
}
my @array = ();
sub callback {
my($tag, %attr) = @_;
return if $tag ne 'a'; # we only look closer at <a href ...>
push(@array, values %attr) if $attr{href} =~ /dlink\.com/i;
}
# Make the parser. Unfortunately, we don't know the base yet
# (it might be different from $url)
$p = HTML::LinkExtor->new(\&callback);
# Request document and parse it as it arrives
$res = $ua->request(HTTP::Request->new(GET => $url),
sub {$p->parse($_[0])});
# Expand all URLs to absolute ones
my $base = $res->base;
@array = map { $_ = url($_, $base)->abs; } @array;
# Print them out
print join("\n", @array), "\n";
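To get the text file the question asks for, you could simply redirect the output; assuming the script above is saved as get_links.pl (a made-up name):
perl get_links.pl http://somesite.com > links.txt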
Setup and Background
I am working on a script that needs to run as /usr/bin/php-cgi instead of /usr/local/bin/php, and I'm having trouble checking for stdin.
If I use /usr/local/bin/php as the interpreter I can do something like
if (defined('STDIN')) { ... }
This doesn't seem to work with php-cgi; it looks to always be undefined. I checked the man page for php-cgi but didn't find it very helpful. Also, if I understand it correctly, the STDIN constant is a file handle for php://stdin. I read somewhere that the constant is not supposed to be available in php-cgi.
Requirements
The shebang needs to be #!/usr/bin/php-cgi -q
The script will sometimes be passed arguments
The script will sometimes receive input via STDIN
Current Script
#!/usr/bin/php-cgi -q
<?php
$stdin = '';
$fh = fopen('php://stdin', 'r');
if($fh)
{
while ($line = fgets( $fh )) {
$stdin .= $line;
}
fclose($fh);
}
echo $stdin;
Problematic Behavior
This works OK:
$ echo hello | ./myscript.php
hello
This just hangs:
./myscript.php
These things don't work for me:
Checking defined('STDIN') // always returns false
Looking to see if CONTENT_LENGTH is defined
Checking variables and constants
I have added this to the script and run it both ways:
print_r(get_defined_constants());
print_r($GLOBALS);
print_r($_COOKIE);
print_r($_ENV);
print_r($_FILES);
print_r($_GET);
print_r($_POST);
print_r($_REQUEST);
print_r($_SERVER);
echo shell_exec('printenv');
I then diff'ed the output and it is the same.
I don't know any other way to check for / get stdin via php-cgi without locking up the script if it does not exist.
/usr/bin/php-cgi -v yields: PHP 5.4.17 (cgi-fcgi)
You can use the stream_select() function, for example:
$stdin = '';
$fh = fopen('php://stdin', 'r');
$read = array($fh);
$write = NULL;
$except = NULL;
if ( stream_select( $read, $write, $except, 0 ) === 1 ) {
while ($line = fgets( $fh )) {
$stdin .= $line;
}
}
fclose($fh);
Regarding your specific problem of hanging when there is no input: php stream reads are blocking operations by default. You can change that behavior with stream_set_blocking(). Like so:
$fh = fopen('php://stdin', 'r');
stream_set_blocking($fh, false);
$stdin = fgets($fh);
echo "stdin: '$stdin'"; // immediately returns "stdin: ''"
Note that this solution does not work with that magic file handle STDIN.
stream_get_meta_data helped me :)
And, as mentioned in the previous answer by Seth Battin, stream_set_blocking($fh, false); works very well.
The following code reads data from standard input when it is provided and skips it when it is not.
For example:
echo "x" | php render.php
and php render.php
In the first case, I provide some data from another stream (I really need to see the changed files from git, something like git status | php render.php).
Here is an example of my solution which works:
$input = [];
$fp = fopen('php://stdin', 'r+');
$info = stream_get_meta_data($fp);
if (!$info['seekable'] && $fp) {
while (false !== ($line = fgets($fp))) {
$input[] = trim($line);
}
fclose($fp);
}
The problem is that you create an endless loop with the while ($line = fgets($fh)) part in your code.
$stdin = '';
$fh = fopen('php://stdin','r');
if($fh) {
// read *one* line from stdin upto "\r\n"
$stdin = fgets($fh);
fclose($fh);
}
echo $stdin;
The above will work if you're piping input like echo foo=bar | ./myscript.php, and it will read a single line when you call it like ./myscript.php.
If you'd like to read more lines and keep your original code, you can send an end-of-input signal with CTRL + D.
To get parameters passed like ./myscript.php foo=bar you could check the contents of the $argv variable, in which the first argument always is the name of the executing script:
./myscript.php foo=bar
// File: myscript.php
$stdin = '';
for ($i = 1; $i < count($argv); $i++) {
$stdin .= $argv[$i];
}
I'm not sure that this solves anything, but perhaps it gives you some ideas.
What am I doing? The script loads a string from a .txt (locations.txt), and separates it into 6 variables. Each variable is separated by a comma. Then I go to a website, whose address depends on these 6 values.
What is the problem? Some of the values in locations.txt contain a space, and when there is a space, the script does not build the correct URL.
The input file is:
locations.txt = Heinz,Weber,Sierra Leone,1915,M,White
Because Sierra Leone has a space, the url is:
https://familysearch.org/search/collection/results#count=20&query=%2Bgivenname%3AHeinz%20%2Bsurname%3AWeber%20%2Bbirth_place%3A%22Sierra%20Leone%22%20%2Bbirth_year%3A1914-1918~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219
But that does not get processed correctly in the code below.
I'm using the packages:
use strict;
use warnings;
use WWW::Mechanize::Firefox;
use HTML::TableExtract;
use Data::Dumper;
use LWP::UserAgent;
use JSON;
use CGI qw/escape/;
use HTML::DOM;
This is the beginning of the code :
open(my $l, 'locations26.txt') or die "Can't open locations: $!";
open(my $o, '>', 'out2.txt') or die "Can't open output file: $!";
while (my $line = <$l>) {
chomp $line;
my %args;
@args{qw/givenname surname birth_place birth_year gender race/} = split /,/, $line;
$args{birth_year} = ($args{birth_year} - 2) . '-' . ($args{birth_year} + 2);
my $mech = WWW::Mechanize::Firefox->new(create => 1, activate => 1);
$mech->get("https://familysearch.org/search/collection/results#count=20&query=%2Bgivenname%3A".$args{givenname}."%20%2Bsurname%3A".$args{surname}."%20%2Bbirth_place%3A".$args{birth_place}."%20%2Bbirth_year%3A".$args{birth_year}."~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219");
# REST OF THE SCRIPT HERE. MANY LINES.
}
As another example, the following would work:
locations.txt = Benjamin,Schuvlein,Germany,1913,M,White
I have not used Mechanize, so I'm not sure whether you need to encode the URL. Try encoding spaces to %20 or + before calling $mech->get:
$url =~ s/ /+/g;
Or
$url =~ s/ /%20/g
whichever works :)
====
Edit:
my $url = "https://familysearch.org/search/collection/results#count=20&query=%2Bgivenname%3A".$args{givenname}."%20%2Bsurname%3A".$args{surname}."%20%2Bbirth_place%3A".$args{birth_place}."%20%2Bbirth_year%3A".$args{birth_year}."~%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219";
$url =~ s/ /+/g;
$mech->get($url);
Try that.
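Alternatively, since the script already imports CGI qw/escape/, you could URL-encode each field before interpolating it instead of patching spaces afterwards. A sketch of what the inside of the while loop might look like (untested; note that the target URL in the question also wraps multi-word places in %22 quotes, which this does not add):
# after %args has been filled in and birth_year turned into a range
$args{$_} = escape($args{$_}) for keys %args;   # spaces become %20, etc.
my $url = "https://familysearch.org/search/collection/results#count=20"
        . "&query=%2Bgivenname%3A$args{givenname}"
        . "%20%2Bsurname%3A$args{surname}"
        . "%20%2Bbirth_place%3A$args{birth_place}"
        . "%20%2Bbirth_year%3A$args{birth_year}~"
        . "%20%2Bgender%3AM%20%2Brace%3AWhite&collection_id=2000219";
$mech->get($url);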
If you have the error
Global symbol "$url" requires explicit package name.
this means that you forgot to declare $url with :
my $url;
Your use section seems excessive; I'm pretty sure that you don't need all of those modules at the same time. If you use WWW::Mechanize, there's no need for LWP::UserAgent, nor CGI, I guess...