Ebuild example for a project in Go

I intend to create a Gentoo ebuild for a project written in Go, and I'm wondering whether this has been done before.
As building and installing a Go project from source seems sufficiently different from projects in other programming languages, it would be useful for me to compare against an existing ebuild to figure out best practices.
However, in the current portage tree I can't find any package that would depend on "dev-lang/go". Is there such an ebuild, perhaps in an overlay?

How about go-overlay as a place to look for example ebuilds? They wrote a special eclass that builds Go applications and libraries in a few lines. Let me use their dev-util/flint-0.0.4 ebuild as an illustration (all comments are mine):
EAPI=6
GOLANG_PKG_IMPORTPATH="github.com/pengwynn"
GOLANG_PKG_VERSION="c3a5d8d9a2e04296fba560d9a22f763cff68eb75"
# Many Go projects don't pin versions of their dependencies,
# so it may have to be done here. You might not need this step if
# upstream already uses 'godep' or a similar tool.
GOLANG_PKG_DEPENDENCIES=(
    "github.com/codegangsta/cli:142e6cd241"
    "github.com/fatih/color:1b35f289c4"
    "github.com/octokit/go-octokit:4408b5393e"
    "github.com/fhs/go-netrc:4422b68c9c"
    "github.com/jingweno/go-sawyer:1999ae5763"
    "github.com/shiena/ansicolor:264b056680"
    "github.com/jtacoma/uritemplates:0a85813eca"
)
# Since many projects don't require custom build steps,
# this single line may be enough.
inherit golang-single
# Nothing special about these variables.
DESCRIPTION="Check your project for common sources of contributor friction"
HOMEPAGE="https://${GOLANG_PKG_IMPORTPATH}/${PN}"
LICENSE="MIT"
KEYWORDS="amd64 x86 arm"
# Prevent simultaneous installation with 'dev-go/flint'.
# Honestly, I was unable to find this package on the Internet.
SLOT="0"
DEPEND="!dev-go/${PN}"

Here is a working example of an ebuild which installs a Go project:
https://github.com/timboudreau/gentoo/blob/master/net-misc/syncthing/syncthing-0.11.7.ebuild

It is possible. I just made one in my overlay. It was a little bit painful, but it works.
There are a few important things that have to be done:
Create the golang eclasses in your repository if you don't have go-overlay added to your system.
The GOLANG_PKG_IMPORTPATH variable specifies the GitHub profile from which the source code will be downloaded.
The GOLANG_PKG_DEPENDENCIES variable specifies the GitHub repositories and particular commits of all dependencies.
inherit golang-single imports the eclass mentioned above and at the same time adds dev-lang/go to the dependencies. A bare-bones sketch follows.
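To make the moving parts concrete, here is a minimal, hypothetical skeleton of such an ebuild (the import path, version, and metadata are placeholders, not a real package; the flint ebuild above is a complete real-world example):
# Hypothetical skeleton of an ebuild built on the go-overlay eclasses.
EAPI=6

# Placeholder GitHub profile and version; substitute your project's values.
GOLANG_PKG_IMPORTPATH="github.com/example"
GOLANG_PKG_VERSION="1.0.0"

inherit golang-single

DESCRIPTION="Example Go tool (placeholder)"
HOMEPAGE="https://${GOLANG_PKG_IMPORTPATH}/${PN}"
LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"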

It looks like there's an existing, working ebuild.
From: https://gist.github.com/matsuu/233858 (and also found at http://git.overlays.gentoo.org/gitweb/?p=proj/glentoo-overlay.git;a=blob_plain;f=dev-lang/golang-platform/golang-platform-9999.ebuild;hb=HEAD)
# Copyright 1999-2009 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
# $Header: $
EAPI="2"
inherit elisp-common eutils mercurial toolchain-funcs
DESCRIPTION="The Go Programming Language"
HOMEPAGE="http://golang.org/"
SRC_URI=""
EHG_REPO_URI="https://go.googlecode.com/hg/"
EHG_REVISION="release"
LICENSE="BSD"
SLOT="0"
KEYWORDS="~amd64 ~x86"
IUSE="emacs vim-syntax"
RESTRICT="test"
RDEPEND="sys-devel/gcc"
DEPEND="${RDEPEND}
emacs? ( virtual/emacs )
sys-devel/bison
sys-apps/ed"
S="${WORKDIR}/hg"
ENVFILE="${WORKDIR}/50${PN}"
src_prepare() {
	GOBIN="${WORKDIR}/bin"
	mkdir -p "${GOBIN}" || die
	sed -i \
		-e "/^GOBIN=/s:=.*:=${GOBIN}:" \
		-e "/MAKEFLAGS=/s:=.*:=${MAKEOPTS}:" \
		src/make.bash || die
	sed -i \
		-e "/^CFLAGS=/s:-O2:${CFLAGS}:" \
		src/Make.conf || die
	case ${ARCH} in
		x86)
			GOARCH="386"
			;;
		*)
			GOARCH="${ARCH}"
			;;
	esac
	case ${CHOST} in
		*-darwin*)
			GOOS="darwin"
			;;
#		*-nacl*)
#			GOOS="nacl"
#			;;
		*)
			GOOS="linux"
			;;
	esac
	cat > "${ENVFILE}" <<EOF
GOROOT="/usr/$(get_libdir)/${PN}"
GOARCH="${GOARCH}"
GOOS="${GOOS}"
EOF
	. "${ENVFILE}"
	export GOBIN GOROOT GOARCH GOOS
}

src_compile() {
	cd src
	PATH="${GOBIN}:${PATH}" GOROOT="${S}" CC="$(tc-getCC)" ./make.bash || die
	if use emacs ; then
		elisp-compile "${S}"/misc/emacs/*.el || die
	fi
}

src_test() {
	cd src
	PATH="${GOBIN}:${PATH}" GOROOT="${S}" CC="$(tc-getCC)" ./run.bash || die
}

src_install() {
	dobin "${GOBIN}"/* || die
	insinto "${GOROOT}"
	doins -r pkg || die
	if use emacs ; then
		elisp-install ${PN} "${S}"/misc/emacs/*.el* || die "elisp-install failed"
	fi
	if use vim-syntax ; then
		insinto /usr/share/vim/vimfiles/plugin
		doins "${S}"/misc/vim/go.vim || die
	fi
	doenvd "${ENVFILE}" || die
	dodoc AUTHORS CONTRIBUTORS README || die
	dohtml -r doc/* || die
}

pkg_postinst() {
	elog "please don't forget to source /etc/profile"
}
Sorry, I haven't tested it as I don't have a running Gentoo instance right now. Hope it works.

Related

How to set up Visual Studio Code for Run/Debug of F# projects/scripts?

I have tried to use Visual Studio Code to run a simple F# script.
I downloaded all the recent versions as of today and installed all the plugins from http://ionide.io/. Despite the nice animated gifs that show that it works, I'm unable to figure out how to make the Build command work.
I created a .ionide file:
[Fake]
linuxPrefix = "mono"
command = "build.cmd"
build = "build.fsx"
But then, how do I install FAKE? So I get it from Xamarin and install it. OK, so now I have this build.fsx:
#r "packages/FAKE.4.12.0/tools/FakeLib.dll" // include Fake lib
RestorePackages()
// Properties
let buildDir = "./build/"
let testDir = "./test/"
let deployDir = "./deploy/"
// version info
let version = "0.2" // or retrieve from CI server
// Targets
Target "Clean" (fun _ ->
CleanDirs [buildDir; testDir; deployDir]
)
Target "fakeBuild" (fun _ ->
!! "./*.fsproj"
|> MSBuildRelease buildDir "Build"
|> Log "AppBuild-Output: "
)
Target "Default" (fun _ ->
trace "Hello World from FAKE"
)
// Dependencies
"Clean"
==> "fakeBuild"
==> "Default"
// start build
RunTargetOrDefault "Default"
Run the Fake:Build command and get:
No handler found for the command: 'fake.fakeBuild'. Ensure there is an activation event defined, if you are an extension.
And now I'm lost.
Install yeoman: >ext install yeoman
Then set up a stand-alone project with >yo
and follow the instructions, saying yes to Paket and FAKE.
Then >paket init
and >paket install, and it should work.
To get the >, use Ctrl+Shift+P.
For the Atom IDE you also have to install the yeoman npm package, which I describe here: http://www.implementingeventsourcingwithfsharp.com/?p=61
How to install the package is described here: https://www.npmjs.com/package/generator-fsharp
I'm not sure you need it for Visual Studio Code.
Hope this helps.
The usual way of doing this is to have a bash script that calls your F# script. Your bash script should look something like this:
#!/bin/bash
if test "$OS" = "Windows_NT"
then # For Windows
    .paket/paket.bootstrapper.exe
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi
    .paket/paket.exe restore
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi
    packages/FAKE/tools/FAKE.exe "$@" --fsiargs build.fsx
else # For non-Windows
    mono .paket/paket.bootstrapper.exe
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi
    mono .paket/paket.exe restore
    exit_code=$?
    if [ $exit_code -ne 0 ]; then
        exit $exit_code
    fi
    mono packages/FAKE/tools/FAKE.exe "$@" --fsiargs build.fsx
fi
Now you can define your build steps in your build.fsx script:
#r "packages/FAKE/tools/FakeLib.dll"
open Fake
// Targets
// Dependencies
// Default target
Hope it helps.
I got it working.
That said, I'm almost as lost as you; the available documentation is not very complete, IMO. Here's what you have to do (you tagged osx-elcapitan so I'm assuming OS X):
Get rid of the .ionide file; you only need it if you don't want to use the defaults. Let's stick to the defaults for now to keep things simple.
make sure the path to FakeLib.dll is correct in your build.fsx
Create a file named build.sh with the following script (make sure the path to FAKE.exe is right):
mono packages/FAKE.4.12.0/tools/FAKE.exe build.fsx "$@"
If it fails again, post the error output (click the OPEN button at the top for the FAKE command).
PS: Your question is two months old so I apologize if you know all this already.

How to test a Makefile for missing dependencies?

Is there a way to test for missing dependencies that show up when compiling a project with multiple jobs (-jN where N > 1)?
I often encounter packages, mostly open source, where the build process works fine as long as I use -j1, or -jN where N is a relatively low value such as 4 or 8, but if I use higher values like 48, which is a little uncommon, it starts to fail due to missing dependencies.
I attempted to write a bash script that would, given a target, figure out all of its dependencies and try to build each of those dependencies explicitly with -j1, in order to validate that none of them is missing dependencies of its own. It appears to work with small and medium packages, but fails on bigger ones like uClibc, for example.
I am sharing my script here, as some people may understand better what I mean by reading code. I also hope that a more robust solution exists and can be shared back.
#!/bin/bash
TARGETS=$*
echo "TARGETS=$TARGETS"
for target in $TARGETS
do
    MAKE="make"
    RULE=`make -j1 -n -p | grep "^$target:"`
    if [ -z "$RULE" ]; then
        continue
    fi

    NEWTARGETS=${RULE#* }
    if [ -z "$NEWTARGETS" ]; then
        continue
    fi
    if [ "${NEWTARGETS}" = "${RULE}" ]; then
        # leaf target, we do not want to test.
        continue
    fi

    echo "RULE=$RULE"
    # echo "NEWTARGETS=$NEWTARGETS"
    $0 $NEWTARGETS
    if [ $? -ne 0 ]; then
        exit 1
    fi

    echo "Testing target $target"
    make clean && make -j1 $target
    if [ $? -ne 0 ]; then
        echo "Make parallel will fail with target $target"
        exit 1
    fi
done
I'm not aware of any open source solution, but this is exactly the problem that ElectricAccelerator, a high-performance implementation of GNU make, was created to solve. It will execute the build in parallel and dynamically detect and correct for missing dependencies, so that the build output is the same as if it had been run serially. It can produce an annotated build log which includes details about the missing dependencies. For example, this simple makefile has an undeclared dependency between abc and def:
all: abc def

abc:
	echo PASS > abc

def:
	cat abc
Run this with emake instead of gmake and enable --emake-annodetail=history, and the resulting annotation file includes this:
<job id="Jf42015c0" thread="f4bfeb40" start="5" end="6" type="rule" name="def" file="Makefile" line="6" neededby="Jf42015f8">
  <command line="7">
    <argv>cat abc</argv>
    <output>cat abc
</output>
    <output src="prog">PASS
</output>
  </command>
  <depList>
    <dep writejob="Jf4201588" file="/tmp/foo/abc"/>
  </depList>
  <timing invoked="0.356803" completed="0.362634" node="chester-1"/>
</job>
In particular the <depList> section shows that this job, Jf42015c0 (in other words, def), depends on job Jf4201588, because the latter modified the file /tmp/foo/abc.
You can give it a try for free with ElectricAccelerator Huddle.
(Disclaimer: I'm the architect of ElectricAccelerator)
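Short of a dedicated tool, a crude sanity check is to rebuild from a clean tree at increasing parallelism levels and stop at the first failure. Below is a minimal sketch, assuming GNU make and a working make clean target; note that dependency races are nondeterministic, so a pass here proves nothing.
#!/bin/bash
# Rebuild from clean at several -j levels; report the first failing level.
set -u
for jobs in 1 4 16 48; do
    make clean >/dev/null 2>&1
    if ! make -j"$jobs" > "build-j$jobs.log" 2>&1; then
        echo "Build failed at -j$jobs; see build-j$jobs.log"
        exit 1
    fi
    echo "OK at -j$jobs"
done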

Disable auto-completion of remote branches in Git Bash?

I'm working on a fairly large git repo with a couple of thousand (remote) branches. I am used to using auto-completion (via [TAB]) in the console (Git Bash in this case), so I unconsciously do that for git commands, too.
e.g. I'd type
git checkout task[TAB]
with the effect that the console often stalls for minutes. Is there a way to limit auto-completion to local branches only?
With Git 2.13 (Q2 2017), you can disable (some of) the branch completion.
git checkout --no-guess ...
# or:
export GIT_COMPLETION_CHECKOUT_NO_GUESS=1
See commit 60e71bb (21 Apr 2017) by Jeff King (peff).
(Merged by Junio C Hamano -- gitster -- in commit b439747, 01 May 2017)
As documented in contrib/completion/git-completion.bash now:
You can set the following environment variables to influence the behavior of the completion routines:
GIT_COMPLETION_CHECKOUT_NO_GUESS
When set to "1", do not include "DWIM" suggestions in git-checkout
completion (e.g., completing "foo" when "origin/foo" exists).
Note: DWIM is short for Do What I Mean, where a system attempts to anticipate what users intend to do, correcting trivial errors automatically rather than blindly executing users' explicit but potentially incorrect inputs.
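Concretely, the checkout DWIM behavior is what makes the following work when no local branch foo exists but origin/foo does (a sketch; the exact output wording varies with the Git version):
$ git checkout foo
Branch 'foo' set up to track remote branch 'foo' from 'origin'.
Switched to a new branch 'foo'
With GIT_COMPLETION_CHECKOUT_NO_GUESS=1, completion simply stops offering such remote-derived names; the DWIM checkout itself still works if you type the name in full.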
completion: optionally disable checkout DWIM
When we complete branch names for "git checkout", we also complete remote branch names that could trigger the DWIM behavior. Depending on your workflow and project, this can be either convenient or annoying.
For instance, my clone of gitster.git contains 74 local "jk/*" branches, but origin contains another 147.
When I want to checkout a local branch but can't quite remember the name, tab completion shows me 251 entries. And worse, for a topic that has been picked up for pu, the upstream branch name is likely to be similar to mine, leading to a high probability that I pick the wrong one and accidentally create a new branch.
Note: "picked up for pu": see a What's cooking in git.git: it starts with:
Commits prefixed with '-' are only in 'pu' (proposed updates) while commits prefixed with '+' are in 'next'.
This is part of the Git Workflow Graduation process.
pu (proposed updates) is an integration branch for things that are not quite ready for inclusion yet
This patch adds a way for the user to tell the completion
code not to include DWIM suggestions for checkout.
This can already be done by typing:
git checkout --no-guess jk/<TAB>
but that's rather cumbersome.
The downside, of course, is that you no longer get completion support when you do want to invoke the DWIM behavior.
But depending on your workflow, that may not be a big loss (for instance, in git.git I am much more likely to want to detach, so I'd type "git checkout origin/jk/<TAB>" anyway).
I'm assuming that you are using the git-completion.bash script, and that you only care about git checkout.
To accomplish this, I just changed one line in the definition of the _git_checkout () function in git-completion.bash:
< __gitcomp_nl "$(__git_refs '' $track)"
---
> __gitcomp_nl "$(__git_heads '' $track)"
My understanding is that this only affects the tab-completion action (because of its location within the * case of the switch-case statement).
If you installed git-completion via homebrew, it's located here:
/usr/local/etc/bash_completion.d/git-completion.bash
Following erik.weathers' answer above, I made the following change so autocompletion can work for both local and remote branches based on the current prefix. By default it'll only search local branches, but if I specify origin/… it'll know I want to search remote branches too.
In the _git_checkout () method, change
__gitcomp_nl "$(__git_refs '' $track)"
to:
# only search local branches instead of remote branches if origin isn't specified
if [[ $cur == "origin/"* ]]; then
    __gitcomp_nl "$(__git_refs '' $track)"
else
    __gitcomp_nl "$(__git_heads '' $track)"
fi
Of course, you can change origin to something else, or you can have it search through a list of remote prefixes if you have more than one, as sketched below.
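For example, here is a sketch of the multi-remote variant (the remote names origin and upstream are assumptions; adjust the list to your repository):
# Assumed remotes: origin and upstream. Complete remote refs only when the
# current word starts with one of these prefixes; otherwise offer local heads.
local r remote_prefix=""
for r in origin upstream; do
    if [[ $cur == "$r/"* ]]; then
        remote_prefix=$r
        break
    fi
done
if [[ -n $remote_prefix ]]; then
    __gitcomp_nl "$(__git_refs '' $track)"
else
    __gitcomp_nl "$(__git_heads '' $track)"
fi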
You can hack /etc/bash_completion.d/git
You'll need to edit __git_refs ()
Note that the change in behaviour will apply everywhere (so even with git push/pull, where you might not want it to). You could, of course, make a copy of the function or pass an extra parameter, but I leave that to you.
You could arrange for the alias co to complete just the local branches while the full checkout command completes all branches.
You could do it as follows: in your .bashrc, redefine the _git_checkout() function. Leave the function unchanged, except at the end:
if [ "$command" = "co" ]; then
    __gitcomp "$(__git_refs_local '' $track)"
else
    __gitcomp "$(__git_refs '' $track)"
fi
Then, you just have to define a new function, __git_refs_local, where you remove the remote stuff; a sketch follows.
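A minimal sketch of such a function (this assumes the __gitdir helper that git-completion.bash defines; it lists refs/heads only, so remote-tracking branches never appear):
__git_refs_local ()
{
    # Local branch names only; remote-tracking refs are deliberately ignored.
    git --git-dir="$(__gitdir)" for-each-ref --format='%(refname:short)' \
        refs/heads 2>/dev/null
}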
Carey Metcalfe wrote a blog post containing a solution that also edits the auto-completion function, but with slightly newer code than other answers. He also defines an alias checkoutr that keeps the old auto-complete behavior in case it’s ever needed.
In short, first create the checkoutr alias with this command:
git config --global alias.checkoutr checkout
Then find git-completion.bash, copy the _git_checkout function into your shell’s RC file so that it gets redefined, and inside that function, replace this line:
__git_complete_refs $track_opt
with the following lines:
if [ "$command" = "checkoutr" ]; then
__git_complete_refs $track_opt
else
__gitcomp_direct "$(__git_heads "" "$cur" " ")"
fi
See the blog post for more details and potential updates to the code.
Modifying $(brew --prefix)/etc/bash_completion.d/git-completion.bash is not a good idea because it will be overwritten every time you update Git through Homebrew.
Combining all the answers, I override only the _git_checkout function from the completion file, in my .bash_profile, after sourcing the completion file:
_git_checkout ()
{
    __git_has_doubledash && return

    case "$cur" in
    --conflict=*)
        __gitcomp "diff3 merge" "" "${cur##--conflict=}"
        ;;
    --*)
        __gitcomp "
            --quiet --ours --theirs --track --no-track --merge
            --conflict= --orphan --patch
            "
        ;;
    *)
        # check if --track, --no-track, or --no-guess was specified
        # if so, disable DWIM mode
        local flags="--track --no-track --no-guess" track=1
        if [ -n "$(__git_find_on_cmdline "$flags")" ]; then
            track=''
        fi
        # only search local branches instead of remote branches if origin isn't
        # specified
        if [[ $cur == "origin/"* ]]; then
            __gitcomp_nl "$(__git_refs '' $track)"
        else
            __gitcomp_nl "$(__git_heads '' $track)"
        fi
        ;;
    esac
}
I'm not using Git Bash myself, but if this is the same script as the one mentioned in http://tekrat.com/2008/04/30/bash-autocompletion-git-super-lazy-goodness/, you should be able to replace git branch -a with a plain git branch in
_complete_git() {
    if [ -d .git ]; then
        branches=`git branch -a | cut -c 3-`
        tags=`git tag`
        cur="${COMP_WORDS[COMP_CWORD]}"
        COMPREPLY=( $(compgen -W "${branches} ${tags}" -- ${cur}) )
    fi
}
complete -F _complete_git git checkout
(in your .profile or similar) and get what you want.
FWIW, here is a hack to __git_complete_refs that does the trick:
__git_complete_refs ()
{
    local remote track pfx cur_="$cur" sfx=" "

    while test $# != 0; do
        case "$1" in
        --remote=*) remote="${1##--remote=}" ;;
        --track) track="yes" ;;
        --pfx=*) pfx="${1##--pfx=}" ;;
        --cur=*) cur_="${1##--cur=}" ;;
        --sfx=*) sfx="${1##--sfx=}" ;;
        *) return 1 ;;
        esac
        shift
    done

    echo cur_ $cur_ > a  # leftover debug output
    if [[ $GIT_COMPLETION_CHECKOUT_NO_GUESS != 1 || $cur_ == "origin"* ]]; then
        __gitcomp_direct "$(__git_refs "$remote" "$track" "$pfx" "$cur_" "$sfx")"
    else
        __gitcomp_direct "$(__git_heads "" "$cur_")"
    fi
}

Problems scripting Xcode documentation with Doxygen

I am trying to use the following script by duckrowing (http://www.duckrowing.com/2010/03/18/documenting-objective-c-with-doxygen-part-ii/) to document an existing Xcode project.
#
# Build the doxygen documentation for the project and load the docset into Xcode
#
# Created by Fred McCann on 03/16/2010.
# http://www.duckrowing.com
#
# Based on the build script provided by Apple:
# http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
#
# Set the variable $COMPANY_RDOMAIN_PREFIX equal to the reverse domain name of your company
# Example: com.duckrowing
#
DOXYGEN_PATH=/Applications/Doxygen.app/Contents/Resources/doxygen
DOCSET_PATH=$SOURCE_ROOT/build/$PRODUCT_NAME.docset
if ! [ -f $SOURCE_ROOT/Doxyfile ]
then
    echo doxygen config file does not exist
    $DOXYGEN_PATH -g $SOURCE_ROOT/Doxyfile
fi
# Append the proper input/output directories and docset info to the config file.
# This works even though values are assigned higher up in the file. Easier than sed.
cp $SOURCE_ROOT/Doxyfile $TEMP_DIR/Doxyfile
echo "INPUT = $SOURCE_ROOT" >> $TEMP_DIR/Doxyfile
echo "OUTPUT_DIRECTORY = $DOCSET_PATH" >> $TEMP_DIR/Doxyfile
echo "RECURSIVE = YES" >> $TEMP_DIR/Doxyfile
echo "EXTRACT_ALL = YES" >> $TEMP_DIR/Doxyfile
echo "JAVADOC_AUTOBRIEF = YES" >> $TEMP_DIR/Doxyfile
echo "GENERATE_LATEX = NO" >> $TEMP_DIR/Doxyfile
echo "GENERATE_DOCSET = YES" >> $TEMP_DIR/Doxyfile
echo "DOCSET_FEEDNAME = $PRODUCT_NAME Documentation" >> $TEMP_DIR/Doxyfile
echo "DOCSET_BUNDLE_ID = $COMPANY_RDOMAIN_PREFIX.$PRODUCT_NAME" >> $TEMP_DIR/Doxyfile
# Run doxygen on the updated config file.
# Note: doxygen creates a Makefile that does most of the heavy lifting.
$DOXYGEN_PATH $TEMP_DIR/Doxyfile
# make will invoke docsetutil. Take a look at the Makefile to see how this is done.
make -C $DOCSET_PATH/html install
# Construct a temporary applescript file to tell Xcode to load a docset.
rm -f $TEMP_DIR/loadDocSet.scpt
echo "tell application \"Xcode\"" >> $TEMP_DIR/loadDocSet.scpt
echo "load documentation set with path \"/Users/$USER/Library/Developer/Shared/Documentation/DocSets/$COMPANY_RDOMAIN_PREFIX.$PRODUCT_NAME.docset\"" >> $TEMP_DIR/loadDocSet.scpt
echo "end tell" >> $TEMP_DIR/loadDocSet.scpt
# Run the load-docset applescript command.
osascript $TEMP_DIR/loadDocSet.scpt
exit 0
However, I am getting these errors
Osascript:/Users/[username]/SVN/trunk/Examples: No such file or directory
Earlier in the script output (in the Xcode window after building) I see these messages:
Configuration file '/Users/[username]/SVN/trunk/Examples' created
The problem, I think, is that the full path is actually
'/Users/[username]/SVN/trunk/Examples using SDK'
I was working on the assumption that the whitespace was the culprit, so I tried a few approaches:
$SOURCE_ROOT = "/Users/[username]/SVN/trunk/Examples using SDK"
$SOURCE_ROOT = /Users/[username]/SVN/trunk/Examples\ using\ SDK
set $SOURCE_ROOT to quoted form of POSIX path of /Users/$USER/SVN/trunk/Examples\ using\ SDK/
but all give the same osascript error as above. Also, the docset is not built into the requested directory:
/Users/$USER/Library/Developer/Shared/Documentation/DocSets/$COMPANY_RDOMAIN_PREFIX.$PRODUCT_NAME.docset
I've scratched my head over this for a while but can't figure out what the problem is. One hypothesis is that I am running Doxygen on a project that is not a new project. To handle this, EXTRACT_ALL is set to YES (which should remove all warning messages, but I still get 19 warnings).
Any help would be much appreciated.
Thank you,
Peyman
I suggest that you double quote "$SOURCE_ROOT" wherever you use it in your shell script. For example:
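Here is a sketch using one line from the script above (quoting keeps a path with spaces, such as 'Examples using SDK', as a single argument):
# Unquoted: the shell splits the path at spaces into several arguments
cp $SOURCE_ROOT/Doxyfile $TEMP_DIR/Doxyfile

# Quoted: each path is passed as one argument, spaces and all
cp "$SOURCE_ROOT/Doxyfile" "$TEMP_DIR/Doxyfile"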
Mouviciel, I figured it out: I needed to put the whole variable in parentheses, i.e. $(SOURCE_ROOT).
Thank you for your help.

How can I include Win32 modules only when I'm running my Perl script on Windows?

I have a problem that I cannot seem to find an answer to.
With Perl I need to use a script across Windows and Unix platforms. The problem is that on Windows we use Win32-specific modules like Win32::Process, and those modules do not exist on Unix.
I need a way to include those Win32 modules only on Windows.
if($^O =~ /win/i)
{
    use win32::process qw(CREATE_NEW_CONSOLE);
}
else
{
    # unix fork
}
The problem lies in that use statement for Windows: no matter what I try, this does not compile on Unix.
I have tried dynamic evals, requires, BEGIN blocks, etc.
Is there a good solution to this problem? Any help will be greatly appreciated.
Thanks in advance,
Dan
Update:
A coworker pointed out to me this is the correct way to do it.
require Win32;
require Win32::Process;

my $flag = Win32::Process::CREATE_NEW_CONSOLE();
Win32::Process::Create($process,
                       $program,
                       $cmd,
                       0,
                       $flag, ".") || die ErrorReport();
print "Child started, pid = " . getPID() . "\n";
Thank you all for your help!
Dan
use is executed at compile time.
Instead do:
BEGIN {
    if( $^O eq 'MSWin32' ) {
        require Win32::Process;
        # import Win32::Process qw(CREATE_NEW_CONSOLE);
        Win32::Process->import(qw/ CREATE_NEW_CONSOLE /);
    }
    else {
        # unix fork
    }
}
See the perldoc for use.
Also see perlvar on $^O.
Update:
As Sinan Unur points out, it is best to avoid indirect object syntax.
I use direct method calls in every case except for calls to import, probably because import masquerades as a built-in. Since import is really a class method, it should be called as a class method.
Thanks, Sinan.
Also, on Win32 systems, you need to be very careful that you get the capitalization of your module names correct. Incorrect capitalization means that symbols won't be imported properly. It can get ugly: use win32::process may appear to work fine.
Are you sure win32::process can be loaded on OSX? "darwin" matches your /win/i.
You may want to use http://search.cpan.org/dist/Sys-Info-Base/ which tries to do the right thing.
That aside, can you post an example of the code that you are actually using, the failure message you're receiving, and which Unix platform (uname -a) you are on?
What about a parser that modifies the file on each OS?
You could run your Perl file through a configure script that works on both operating systems and outputs Perl with the proper use clauses. You could even bury the parse action in the executable script that launches the code.
Originally I was thinking that preprocessor directives from C would do the trick, but I don't know Perl very well.
Here's an answer to your second set of questions:
Are you using strict and warnings?
Did you define an ErrorReport() subroutine? ErrorReport() is just an example in the synopsis for Win32::Process.
CREATE_NEW_CONSOLE is probably not numeric because it didn't import properly. Check the capitalization in your call to import.
Compare these one-liners:
C:\>perl -Mwin32::process -e "print 'CNC: '. CREATE_NEW_CONSOLE"
CNC: CREATE_NEW_CONSOLE

C:\>perl -Mwin32::process -Mstrict -e "print 'CNC: '. CREATE_NEW_CONSOLE"
Bareword "CREATE_NEW_CONSOLE" not allowed while "strict subs" in use at -e line 1.
Execution of -e aborted due to compilation errors.

C:\>perl -MWin32::Process -e "print 'CNC: '. CREATE_NEW_CONSOLE"
CNC: 16
You could just place your platform-specific code inside an eval {} and check for an error:
BEGIN {
    eval {
        require Win32::Process;
        Win32::Process->import(qw'CREATE_NEW_CONSOLE');
    };
    if ($@) { # $@ is $EVAL_ERROR
        # Unix code here
    }
}
