PAR packer is not including user-defined Perl modules

I have a Perl script, test.pl, which uses another Perl module, fact.pm, located in the crypt/Module directory.
The source code of crypt/test.pl is:
#!/usr/bin/perl
use Term::ANSIColor qw(:constants);
use File::Path;
use POSIX qw(strftime);
use File::Basename qw(dirname);
use Cwd qw(abs_path);
use lib dirname(dirname abs_path $0);
use crypt::Module::fact qw(factorial);
&factorial();  # function defined in fact.pm
print("Thanks for that thought \n");
The PAR packer command used is:
pp -M Module::fact -o test test.pl
When I copy just the test executable to a different directory and run it, I get the error below:
Can't locate crypt/Module/fact.pm in @INC (you may need to install the crypt::Module::fact module)
How can I include the module in the executable?

First, I'd recommend using the -c and/or -x options of the pp utility, which are used "to determine additional run-time dependencies". I've gotten into the habit of using both.
Although you are using the -M option to add a module, I think there is a typo in that option: your code uses a crypt::Module::fact module, but you are specifying Module::fact with -M. Using -M crypt::Module::fact instead of -M Module::fact should solve the problem.
Also, you might need to use -I (or --lib) to specify the path to any additional module library directories.
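For example, run from the directory that contains crypt/ (a sketch, not tested; adjust the paths to your actual layout):
# -I . lets pp resolve crypt/Module/fact.pm; -c/-x scan for run-time dependencies
pp -c -x -I . -M crypt::Module::fact -o test crypt/test.pl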

Related

How to get the path of initialization or script files in SWI-Prolog

If I run Prolog scripts like this:
user:/home/user$ swipl -f some/path/init.pl -s another/dir/script.pl
How can I get the relative or full paths: some/path and another/dir or /home/user/some/path and /home/user/another/dir?
I need it to locate other files that are, let's say, in another/dir/xml.
working_directory/2 doesn't do the job.
Neither does source_file/1.
For example, Perl has the FindBin module for this; Python has similar options as well.

Coding a relative path to file in OS X [duplicate]

I have a Haskell script that runs via a shebang line making use of the runhaskell utility. E.g...
#! /usr/bin/env runhaskell
module Main where
main = do { ... }
Now, I'd like to be able to determine the directory in which that script resides from within the script, itself. So, if the script lives in /home/me/my-haskell-app/script.hs, I should be able to run it from anywhere, using a relative or absolute path, and it should know it's located in the /home/me/my-haskell-app/ directory.
I thought the functionality available in the System.Environment module might be able to help, but it fell a little short. getProgName did not seem to provide useful file-path information. I found that the environment variable _ (that's an underscore) would sometimes contain the path to the script as it was invoked; however, as soon as the script is invoked via some other program or parent script, that environment variable seems to lose its value (and I need to invoke my Haskell script from another, parent application).
Also useful-to-know would be whether I can determine the directory in which a pre-compiled Haskell executable lives, using the same technique or otherwise.
As I understand it, this is historically tricky in *nix. There are libraries for some languages to provide this behavior, including FindBin for Haskell:
http://hackage.haskell.org/package/FindBin
I'm not sure what this will report with a script though. Probably the location of the binary that runhaskell compiled just prior to executing it.
Also, for compiled Haskell projects, the Cabal build system provides data-dir and data-files and the corresponding generated Paths_<yourproject>.hs for locating installed files for your project at runtime.
http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#paths-module
There is a FindBin package which seems to suit your needs and it also works for compiled programs.
For compiled executables, in GHC 7.6 or later you can use System.Environment.getExecutablePath.
getExecutablePath :: IO FilePath
Returns the absolute pathname of the current executable.
Note that for scripts and interactive sessions, this is the path to the
interpreter (e.g. ghci.)
There is the executable-path package, which worked with my runghc script. FindBin didn't work for me, as it returned my current directory instead of the script's directory.
I could not find a way to determine script path from Haskell (which is a real pity IMHO). However, as a workaround, you can wrap your Haskell script inside a shell script:
#!/bin/sh
SCRIPT_DIR=`dirname "$0"`
runhaskell <<EOF
main = putStrLn "My script is in \"$SCRIPT_DIR\""
EOF
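If you need SCRIPT_DIR to be an absolute path regardless of how the wrapper is invoked, one common variant of that assignment is the following (a sketch, assuming a POSIX shell):
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)   # resolve the wrapper's directory to an absolute path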

Xeon Phi cannot execute binary file

I am trying to execute a binary file on a Xeon Phi coprocessor, and it comes back with "bash: cannot execute binary file". So I am trying to find out how to view an error log, or to have it display what is happening when I tell it to execute, so I can see what is causing it not to work. I have already tried bash --verbose, but it didn't display any additional information. Any ideas?
You don't say where you compiled your executable, nor where you tried to execute it from.
To compile a program on the host system to be executed directly on the coprocessor, you must do one of the following (example commands are sketched just after this list):
if you are using one of the Intel compilers, add -mmic to the compiler command line;
if you are using gcc, use the cross-compilers provided with the MPSS (/usr/linux-k1om-4.7) - note, however, that the gcc compiler does not take advantage of vectorization on the coprocessor.
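For example (sketches only; myprog.c and the output name are placeholders, and the exact cross-compiler path and prefix can vary between MPSS releases):
# Intel compiler, native coprocessor build
icc -mmic -o myprog.mic myprog.c
# MPSS cross-gcc, native coprocessor build
/usr/linux-k1om-4.7/bin/x86_64-k1om-linux-gcc -o myprog.mic myprog.c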
If you want to compile directly on the coprocessor, you can install the necessary files from the additional rpm files provided for the coprocessor (found in mpss-<version>/k1om), using the directions from the MPSS user's guide for installing additional rpm files.
To run a program on the coprocessor, if you have compiled it on the host, you must either (both approaches are sketched just below):
copy your executable file and the required libraries to the coprocessor using scp, then ssh to the coprocessor yourself and execute the code; or
use the micnativeloadex command on the host - you can find a man page for it on the host.
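A sketch of both approaches, assuming the coprocessor is reachable as mic0 and your binary is myprog.mic (both names are placeholders):
# copy the binary (and any extra libraries) over, then log in and run it
scp myprog.mic mic0:/tmp/
ssh mic0 /tmp/myprog.mic
# or launch it from the host with micnativeloadex
micnativeloadex ./myprog.mic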
If you are writing a program using the offload model (part of the work is done using the host then some of the work is passed off to the coprocessor), you can compile on the host using the Intel compilers with no special options.
Note, however, that, regardless of which method you use, any libraries used with an executable for the coprocessor will themselves need to be built for the coprocessor. The default libraries exist, but for any library you add, you need to build a coprocessor version in addition to any version you build for the host system.
I highly recommend the articles you will find under https://software.intel.com/en-us/articles/programming-and-compiling-for-intel-many-integrated-core-architecture. These articles are written by people who develop and/or support the various programming tools for the coprocessor and should answer most of your questions.
Update: What's below does NOT answer the OP's question - it is one possible explanation for the cannot execute binary file error, but the fact that the error message is prefixed with bash: indicates that the binary is being invoked correctly (by bash), but is not compatible with the executing platform (compiled for a different architecture) - as @Barmar has already stated in a comment.
Thus, while the following contains some (hopefully still somewhat useful) general information, it does not address the OP's problem.
One possible reason for cannot execute binary file is to mistakenly pass a binary (executable) file -- rather than a shell script (text file containing shell code) -- as an operand (filename argument) to bash.
The following demonstrates the problem:
bash printf # fails with '/usr/bin/printf: /usr/bin/printf: cannot execute binary file'
Note how the mistakenly passed binary's path prefixes the error message twice; if the first prefix says bash: instead, the cause is most likely not a problem of incorrect invocation, but one of trying to invoke an incompatible binary (compiled for a different architecture).
If you want bash to invoke a binary, you must use the -c option to pass it, which allows you to specify an entire command line; i.e., the binary plus arguments; e.g.:
bash -c '/usr/bin/printf "%s\n" "hello"' # -> 'hello'
If you pass a mere binary filename instead of a full path - e.g., -c 'program ...' - then a binary by that name must exist in one of the directories listed in the $PATH variable that bash sees, otherwise you'll get a command not found error.
If, by contrast, the binary is located in the current directory, you must prefix the filename with ./ for bash to find it; e.g. -c './program ...'

Ruby equivalent of .irbrc?

While irb uses .irbrc to automatically perform certain actions upon start, I have not been able to find out how to do the same automatically for invocations of ruby itself. Any suggestions as to where the documentation for this can be found would be greatly appreciated.
For environments where I need this (essentially never) I've used the -r [filename] option, and the RUBYOPT environment variable.
(You may want to specify include directories, which can be done in a variety of ways, including the -I [directory] option.)
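For example (a sketch; mine.rb, its directory, and some_script.rb are placeholders):
# one-off: add the directory to the load path and require the file
ruby -I "$HOME/.ruby/lib" -r mine some_script.rb
# for every run in this shell: let ruby pick the switches up from RUBYOPT
export RUBYOPT="-I$HOME/.ruby/lib -rmine"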
This is essentially the same answer as Phrogz's, but without the shell script. The scripts are a bit more versatile, since you can have any number of them for trivial pre-execution environment rigging.
Just as you can use ruby -rfoo to require library foo for a single run, you can arrange to always require a particular library for every Ruby run:
if [ -f "$HOME/.ruby/lib/mine.rb" ]; then
  RUBYLIB="$HOME/.ruby/lib"
  RUBYOPT="rmine"
  export RUBYLIB RUBYOPT
fi
Put your own custom code in a file (like mine.rb above) and get your interpreter to always add its directory to your $LOAD_PATH (aka $:) and always require it (which runs the code therein).
Shell code above and background information here:
http://tbaggery.com/2007/02/11/auto-loading-ruby-code.html
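To check that the hook is actually active, you can ask Ruby whether the file was loaded (a sketch; mine.rb is the placeholder from above):
# prints the full path of mine.rb if it was auto-required, nothing otherwise
ruby -e 'puts $LOADED_FEATURES.grep(/mine\.rb/)'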

How to check availability of Perl, its version and presence of a required module?

I have written a Perl script that I want to give to everyone.
For that, I planned to write a bash script that tests a user's environment and determines whether it is capable of running the Perl script.
I want to test things like:
whether Perl is installed on that system,
whether the Perl version is 5 or newer,
whether the module JSON::Any is available.
Any suggestion would be greatly appreciated :-)
No, do not write a shell script. Perl already has a perfectly fine way of doing this. The correct way to do this is to build a CPAN-ready distribution using the normal toolchain. Some of this is explained in perlnewmod, perlmodstyle and perlmodinstall.
For a minimal working example, create a directory layout thus:
.
├── Build.PL
├── README
└── script
└── abuscript.pl
In the Build.PL file, put:
use 5.000;
use Module::Build qw();
Module::Build->new(
    module_name        => 'abuscript',
    dist_version       => '1.000',
    dist_author        => 'abubacker <abubacker@example.com>',
    dist_abstract      => 'describe what the script does in one sentence',
    configure_requires => {
        'perl' => '5.000',
    },
    requires => {
        'JSON::Any' => 0,
    },
)->create_build_script;
Change the details to suit your purposes.
In the README file, put some installation instructions, for instance:
To install this module, run the following commands:
perl Build.PL
./Build install
Once you're done with all that, you run:
perl Build.PL
./Build manifest
./Build dist
This will result in a .tar.gz archive which you will distribute. Tell your users to install it like any other CPAN module, or if they don't know what that means, they should read the README.
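From the user's side, installing the archive might look like this (a sketch; the archive name follows from the module_name and dist_version above):
tar xzf abuscript-1.000.tar.gz
cd abuscript-1.000
perl Build.PL
./Build install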
If you have time, I recommend converting your script to a module. The pl2pm program (which comes with Perl) and the CPAN module Module::Starter::PBP can help you.
If the license permits, it is possible to upload your code to CPAN to make it even more convenient for your users. Ask for help in any of the following places first: the mailing list module-authors@perl.org, the web forum PerlMonks, or the IRC channel #toolchain on MagNET (irc://irc.perl.org/toolchain).
Regarding checking Perl availability, the easiest way is to check the return code (exit code) of the command perl -v; if it is not 0, you do not have Perl.
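A minimal sketch of that check in a bash script:
# non-zero exit (or "command not found") means there is no usable perl on PATH
if ! perl -v >/dev/null 2>&1; then
    echo "Perl is not installed or not on your PATH" >&2
    exit 1
fi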
Now regarding Perl requirements, you should deal with them from inside your Perl script:
#!/usr/bin/env perl
use 5.006_001;
use ModuleName 2.0;
The above Perl code will run only with perl 5.6.1 or newer and with module "ModuleName" version 2.0 or newer. There is no need to manually check the Perl version from bash; it is better and easier to do it directly from the Perl script.
References:
Requiring a minimal Perl version
Checking if a Perl module is installed
if perl -MJSON::Any -e 'print "$JSON::Any::VERSION\n"' >/dev/null 2>&1
then : OK
else echo "Cannot find a perl with JSON::Any installed" 1>&2
exit 1
fi
I often use '${PERL:-perl}' and similar constructs to identify the command (for awk vs nawk or gawk; troff vs groff; etc).
If you want to test the version of JSON::Any, capture the output from the command instead.
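For example, building on the test above (a sketch; the variable name is arbitrary):
# capture the module version instead of discarding the output
if version=$(perl -MJSON::Any -e 'print $JSON::Any::VERSION' 2>/dev/null)
then echo "JSON::Any version: $version"
else echo "Cannot find a perl with JSON::Any installed" 1>&2
     exit 1
fi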
If you want to test the version of Perl, add 'use 5.008009;' or whatever number you think is sensible. (It wasn't so long ago that they finally removed Perl 4 from one of the NFS-mounted file systems at work - but that was not the only Perl on the machine - at least, not in the last decade or more!)
