In Perl, how do I use modules created with module-starter in the same directory? - perl-module

I have a bunch of scripts which I want to refactor into modules. This is the first time I'm doing something like this. I read online, and Module::Starter seems to be one of the preferred ways of creating new modules. But how should I, during development, use the modules from other unrelated scripts? I don't want to build/install every module every time I modify it. Furthermore, how should I distribute scripts with modules in the same directory? (I.e., I want to distribute an application script.pl with Foo::Bar and Foo::Baz in the same tarball, and I want 'perl script.pl' to just work, especially on Strawberry Perl.) Any hints?
> module-starter --module=Foo::Bar
Created Foo-Bar
Created Foo-Bar/lib/Foo
Created Foo-Bar/lib/Foo/Bar.pm
Created Foo-Bar/t
Created Foo-Bar/t/pod-coverage.t
Created Foo-Bar/t/pod.t
Created Foo-Bar/t/manifest.t
Created Foo-Bar/t/boilerplate.t
Created Foo-Bar/t/00-load.t
Created Foo-Bar/ignore.txt
Created Foo-Bar/Makefile.PL
Created Foo-Bar/Changes
Created Foo-Bar/README
Created Foo-Bar/MANIFEST
Created starter directories and files
> perl -MFoo::Bar -w -e ''
Can't locate Foo/Bar.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.10.1 /usr/local/share/perl/5.10.1 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.10 /usr/share/perl/5.10 /usr/local/lib/site_perl .).
BEGIN failed--compilation aborted.

Add the directories you want included in Perl's module search path (@INC) using the PERL5LIB environment variable:
export PERL5LIB=/somedir
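For the module-starter layout created above, that means pointing PERL5LIB at the distribution's lib directory; a minimal sketch, assuming Foo-Bar was created in the current directory:

export PERL5LIB=$PWD/Foo-Bar/lib
perl -MFoo::Bar -w -e ''

For the shipped-tarball case, script.pl can instead extend @INC relative to its own location (FindBin is a core module), assuming the modules are shipped in a lib directory next to the script; this keeps 'perl script.pl' working on Strawberry too:

use FindBin;
use lib "$FindBin::Bin/lib";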

Related

Install systemd service using autotools

I have an autotools project which successfully builds and tests an app (https://github.com/goglecm/AutoBrightnessCam). The app is installed in the bin directory (preceded by any prefix the user specifies). That's pretty straightforward. I now need to make a systemd service to start it at boot time. I've created the service file and ran it manually and it works fine.
The last bit is to tell configure.ac and Makefile.am to patch a *.service.in file with the correct path for the app (just like config.h is created from config.h.in).
Will using AC_CONFIG_HEADERS be appropriate to patch *.service.in into *.service? Is there another macro used for "non-headers" perhaps?
Also, how do I specify that the service file should land (i.e. installed) in /etc/systemd/system?
Is there perhaps a better way of starting this app at boot time without systemd?
How do I specify that the service file should land (i.e. installed) in /etc/systemd/system?
According to systemd's daemon(7) man page:
Installing systemd Service Files
At the build installation time (e.g. make install during package build), packages are recommended to install their systemd unit files in the directory returned by pkg-config systemd --variable=systemdsystemunitdir (for system services) or pkg-config systemd --variable=systemduserunitdir (for user services). This will make the services available in the system on explicit request but not activate them automatically during boot. Optionally, during package installation (e.g. rpm -i by the administrator), symlinks should be created in the systemd configuration directories via the enable command of the systemctl(1) tool to activate them automatically on boot.
Packages using autoconf(1) are recommended to use a configure script excerpt like the following to determine the unit installation path during source configuration:
PKG_PROG_PKG_CONFIG
AC_ARG_WITH([systemdsystemunitdir],
     [AS_HELP_STRING([--with-systemdsystemunitdir=DIR], [Directory for systemd service files])],,
     [with_systemdsystemunitdir=auto])
AS_IF([test "x$with_systemdsystemunitdir" = "xyes" -o "x$with_systemdsystemunitdir" = "xauto"], [
     def_systemdsystemunitdir=$($PKG_CONFIG --variable=systemdsystemunitdir systemd)
     AS_IF([test "x$def_systemdsystemunitdir" = "x"],
           [AS_IF([test "x$with_systemdsystemunitdir" = "xyes"],
                  [AC_MSG_ERROR([systemd support requested but pkg-config unable to query systemd package])])
            with_systemdsystemunitdir=no],
           [with_systemdsystemunitdir="$def_systemdsystemunitdir"])])
AS_IF([test "x$with_systemdsystemunitdir" != "xno"],
      [AC_SUBST([systemdsystemunitdir], [$with_systemdsystemunitdir])])
AM_CONDITIONAL([HAVE_SYSTEMD], [test "x$with_systemdsystemunitdir" != "xno"])
This snippet allows automatic installation of the unit files on systemd machines, and optionally allows their installation even on machines lacking systemd. (Modification of this snippet for the user unit directory is left as an exercise for the reader.)
Additionally, to ensure that make distcheck continues to work, it is recommended to add the following to the top-level Makefile.am file in automake(1)-based projects:
AM_DISTCHECK_CONFIGURE_FLAGS = \
--with-systemdsystemunitdir=$$dc_install_base/$(systemdsystemunitdir)
Finally, unit files should be installed in the system with an automake excerpt like the following:
if HAVE_SYSTEMD
systemdsystemunit_DATA = \
foobar.socket \
foobar.service
endif
...
So it appears you should use systemdsystemunitdir and systemduserunitdir. As for how well Autotools itself supports them...
A quick grep on Fedora 31 (grep systemdsystemunitdir /bin/autoconf and grep -IR systemdsystemunitdir /usr/share) shows no built-in Autotools support yet. Seven years and counting...
Is there perhaps a better way of starting this app at boot time without systemd?
Systemd should be OK to start your app. Simply use systemctl(1) to enable and start it as you normally would.
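For example, once the unit file is installed, enabling and starting it in one step (unit name taken from your repository):

systemctl enable --now autobrightnesscam.service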
Based on your GitHub repo and autobrightnesscam.service.in, I would not dick around with Autotools for this. You can waste copious amounts of time working around Autotools' shortcomings (speaking from experience).
My configure.ac script (which is just a shell script) would copy autobrightnesscam.service.in to autobrightnesscam.service and then use sed to substitute in the correct directories and files. Then I would copy the updated autobrightnesscam.service to its proper location in AC_CONFIG_COMMANDS_POST. Maybe something like:
SERVICE_FILE=autobrightnesscam.service
SYSTEMD_DIR=`pkg-config systemd --variable=systemdsystemunitdir`

# Use default if SYSTEMD_DIR is empty
if test x"$SYSTEMD_DIR" = "x"; then
    SYSTEMD_DIR=/etc/systemd/system
fi

AC_CONFIG_COMMANDS_POST([cp "$SERVICE_FILE" "$SYSTEMD_DIR"])
AC_CONFIG_COMMANDS_POST([systemctl enable "$SYSTEMD_DIR/$SERVICE_FILE"])
AC_CONFIG_COMMANDS_POST([systemctl start "$SERVICE_FILE"])
Will using AC_CONFIG_HEADERS be appropriate to patch *.service.in into *.service? Is there another macro used for "non-headers" perhaps?
No. AC_CONFIG_HEADERS is for setting up configuration headers to support your build. It is rarely used for anything other than building a config.h recording the results of certain tests that Autoconf performs, and it is not as flexible as other options in this area.
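Its typical use is a single line in configure.ac, naming a header template that Autoconf fills with #define results (config.h.in is processed into config.h):

AC_CONFIG_HEADERS([config.h])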
If you have additional files that you want Autoconf to build from templates then you should tell Autoconf about them via AC_CONFIG_FILES. Example:
AC_CONFIG_FILES([Makefile AutoBrightnessCam.service])
But if some of the data with which you are filling that template are installation directories, then Autoconf is probably not the right place to do this at all, because it makes provision for the installation prefix to be changed by arguments to make. You would at least need to work around that, but the best thing to do is to roll with it and build the .service file under make's control instead. It's not that hard, and there are several technical advantages, some applying even if there aren't any installation-directory substitutions to worry about.
You can do it the same way that configure does, by running the very same template you're already using through sed, with an appropriate script. Something like this would appear in your Makefile.am:
SERVICE_SUBS = \
    s,[@]VARIABLE_NAME[@],$(VARIABLE_NAME),g; \
    s,[@]OTHER_VARIABLE[@],$(OTHER_VARIABLE),g

AutoBrightnessCam.service: AutoBrightnessCam.service.in
	$(SED) -e '$(SERVICE_SUBS)' < $< > $@
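To make this concrete, a hypothetical AutoBrightnessCam.service.in consumed by that rule might look like the following, with a matching substitution of s,[@]bindir[@],$(bindir),g in SERVICE_SUBS (the daemon name in ExecStart is a placeholder):

[Unit]
Description=AutoBrightnessCam daemon

[Service]
ExecStart=@bindir@/autobrightnesscam

[Install]
WantedBy=multi-user.target

Because make expands $(bindir) itself, the generated unit stays correct even when the prefix is overridden at make time.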
Also, how do I specify that the service file should land (i.e. installed) in /etc/systemd/system?
You use Automake's standard mechanism for specifying custom installation locations. Maybe something like this:
systemdsysdir = $(sysconfdir)/systemd/system
systemdsys_DATA = AutoBrightnessCam.service
(This lands in /etc/systemd/system when configure is run with --sysconfdir=/etc; under the default prefix it would be /usr/local/etc/systemd/system.)
Is there perhaps a better way of starting this app at boot time without systemd?
On a systemd-based machine, systemd is in control of what starts at boot. If you want the machine to start your application automatically at boot, then I think your options are limited to:
1. configuring systemd to start it;
2. configuring something in a chain of programs ultimately started by systemd to start it; or
3. hacking the bootloader or kernel to start it.
There is room for diverging opinions here, but I think the first of those is the cleanest and most future-proof, and I cannot recommend the last.

Puppet - how to pass arguments to the command line

I am a newbie to Puppet and I wonder how I can pass arguments on the command line. I will explain myself:
This is the command that I'm running (puppet apply):
C:\>puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\site.pp
site.pp:
File { backup => false }
node default {
include 'tn'
}
It means that I am running 'tn' which is one of the modules in my puppet project.
For example,
I have these modules in my puppet project:
tn
ps
av
So to run each module I need to go to this site.pp file and change it to
include 'ps'
or
include 'av'
My question is -
How do I pass these modules as arguments to the puppet apply command?
I know that I can create 3 .pp files, each containing one module (ps, av, tn).
And then my command will look like:
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\ps.pp
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\av.pp
puppet apply --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\tn.pp
But I think that's not a good solution.
Is there another way to pass these modules as arguments to puppet apply?
If I didn't mention - each module is responsible for different actions.
thanks !!!
I know that I can create 3 .pp files, each containing one module (ps, av, tn)
[...]
But I think that's not a good solution.
Why isn't it a good solution? It seems perfectly sensible to me that if you have three different things you want to be able to do, then you have a separate file to use to accomplish each.
Nevertheless, if your modules do not use each other, then you could probably accomplish what you describe by relying on tags. Have your site manifest include all three modules:
File { backup => false }
node default {
include 'tn'
include 'ps'
include 'av'
}
Then use the --tags option to select only one of those modules and all the other classes it brings in:
puppet apply --tags ps --environment test -l C:\Puppet_logs\log.log C:\ProgramData\PuppetLabs\code\environments\test\manifests\site.pp
A .pp file is a class file, not a module. A module contains the classes and anything else needed to support/test those classes; take a look at https://puppet.com/docs/puppet/5.5/modules_fundamentals.html.
Look at how modules are laid out on https://forge.puppet.com/
It's well worth looking at the PDK (https://puppet.com/docs/pdk/1.x/pdk.html), as it'll build a module for you; you just need to add the classes.
In your case you probably want to create a new module (let’s call it mymodule) and in that module put all your tn.pp ps.pp and av.pp class files under the C:\ProgramData\PuppetLabs\code\environments\test\modules\mymodule\manifests directory.
Then, for local testing, use the examples pattern: in your module you'll have an examples directory, and in there you might have a file called ps.pp which would contain include mymodule::ps to include that ps.pp class file.
The aim of the examples directory is to give you a method of passing in parameters for local testing.
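Putting that together, a minimal sketch of the files described above (mymodule and the class body are placeholders):

# modules/mymodule/manifests/ps.pp
class mymodule::ps {
  notify { 'ps class applied': }
}

# modules/mymodule/examples/ps.pp
include mymodule::ps

You can then smoke-test just that class with puppet apply modules/mymodule/examples/ps.pp (pointing --modulepath at your modules directory if needed).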
Back in your site.pp file you'd apply it with:
node default {
  include mymodule::ps
}
So now you want to apply different classes to different nodes, and there you hit the world of node classification; there are many ways you can do that. In your case I think you're probably doing this on a small scale, so you'd have:
node psserver.example.com {
  include mymodule::ps
}
node tnserver.example.com {
  include mymodule::tn
}
Have a look at some of the online training https://puppet.com/learning-training/kits/puppet-language-basics

Creating Ruby Main (command line utility) program with multiple files

I am trying to use the main gem for making command line utilities. This was presented in a recent Ruby Rogues podcast.
If I put all the code in one file and require that file, then RSpec gives me an error, as the main DSL regards the RSpec run as a command-line invocation of the main utility.
I can break out a method into a new file and have RSpec require that file. Suppose you have this program, but want to put the do_something method in a separate file to test with RSpec:
require 'main'

def do_something(foo)
  puts "foo is #{foo}"
end

Main {
  argument('foo') {
    required                     # this is the default
    cast :int                    # value cast to Fixnum
    validate { |foo| foo == 42 } # raises error in failure case
    description 'the foo param'  # shown in --help
  }

  do_something(arguments['foo'].value)
}
What is the convenient way to distribute/deploy a ruby command line program with multiple files? Maybe create a gem?
You are on the right track for testing - basically you want your "logic" in separate files so you can unit test them. You can then use something like Aruba to do an integration test.
With multiple files, your best bet is to distribute it as a RubyGem. There are lots of resources out there, but the gist of it is:
Put your executable in bin
Put your files in lib/YOUR_APP/whatever.rb where "YOUR_APP" is the name of your app. I'd also recommend namespacing your classes with modules named for your app
In your executable, require the files in lib as if lib were in the load path
In your gemspec, make sure to indicate what your bin files are and what your lib files are (if you generate it with bundle gem and are using git, you should be good to go)
This way, your app will have access to the files in lib at runtime, when installed with RubyGems. In development, you will need to either do bundle exec bin/my_app or RUBYLIB=lib bin/my_app. Point is, RubyGems takes care of the load path at runtime, but not at development time.
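A minimal sketch of that layout for a hypothetical gem named my_app (all names are placeholders):

# Layout:
#   my_app.gemspec
#   bin/my_app
#   lib/my_app.rb
#   lib/my_app/runner.rb

# bin/my_app -- requires lib files as if lib were on the load path
require 'my_app'
MyApp::Runner.new.run

# lib/my_app.rb
require 'my_app/runner'

# lib/my_app/runner.rb -- classes namespaced under a module named for the app
module MyApp
  class Runner
    def run
      puts 'doing something'
    end
  end
end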

How can I configure Module::Build to NOT install files as read-only?

I've encountered a scenario where I'm building a Perl module as part of another build system on a Windows machine. I use the --install_base option of Module::Build to specify a temporary directory to put module files in until the overall build system can use them. Unfortunately, that other build system has a problem if any of the files it depends on are read-only: it tries to delete any generated files before rebuilding them, and deleting a read-only file gives an error. By default, Module::Build installs its libraries with the read-only bit enabled.
One option would be to make a new step in the build process that removes the read-only bit from the installed files, but due to the nature of the build tool that will require a second temporary directory...ugh.
Is it possible to configure a Module::Build based installer to NOT enable that read-only bit when the files are installed to the --install_base directory? If so, how?
No, it's not a configurable option. It's done in the copy_if_modified method in Module::Build::Base:
# mode is read-only + (executable if source is executable)
my $mode = oct(444) | ( $self->is_executable($file) ? oct(111) : 0 );
chmod( $mode, $to_path );
If you controlled the Build.PL, you could subclass Module::Build and override copy_if_modified to call the base class and then chmod the file writable. But I get the impression you're just trying to install someone else's module.
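A minimal sketch of that subclassing approach in a Build.PL (the module name is hypothetical, and note the update below: this only affects the copy into blib, not the final install):

use strict;
use warnings;
use Module::Build;

my $class = Module::Build->subclass(
    class => 'My::Builder',
    code  => <<'CODE');
sub copy_if_modified {
    my $self = shift;
    my $to_path = $self->SUPER::copy_if_modified(@_);
    # The base class marks the copy read-only; make it writable again.
    chmod( oct(666), $to_path ) if $to_path;
    return $to_path;
}
CODE

$class->new(
    module_name => 'Foo::Bar',   # hypothetical
    license     => 'perl',
)->create_build_script;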
Probably the easiest thing to do would be to install a copy of Module::Build in a private directory, then edit it to use oct(666) (or whatever mode you want). Then invoke perl -I /path/to/customized/Module/Build Build.PL. Or, (as you said) just use the standard Module::Build and add a separate step to mark everything writable afterwards.
Update: ysth is right; it's ExtUtils::Install that actually does the final copy. copy_if_modified is for populating blib. But ExtUtils::Install also hardcodes the mode to read-only. You could use a customized version of ExtUtils::Install, but it's probably easier to just have a separate step.
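If you do add that separate step on Windows, clearing the read-only attribute recursively is a one-liner (the staging path is hypothetical):

attrib -R "C:\build\staging\*" /S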

Mogenerator and Xcode 4

I just installed mogenerator+xmo'd on my development machine and would like to start playing with it. The only instructions I could really find online were from a previous SO post, and those don't work with Xcode 4 (or at least ⌘I doesn't pull up metadata any more and I don't know how).
So to get things up and running, is all that needs to happen to add xmod in the .xcdatamodeld's comments (wherever they are) and the classes will be generated/updated on save from then on?
While trying to find this answer myself, I found MOGenerator and Xcode 4 integration guide on esenciadev.com. This solution is not a push-button integration, but it works. The link has detailed instructions, but generally you:
Copy the shell scripts into your project
Add build rules to your target to run the two shell scripts
When you build your project, the script runs mogenerator on all .xcdatamodel files in your project directory. After the build, if the script generates new class files, you must manually add them to your project. Subsequent builds will remember existing mogenerator-generated files.
Caveats:
The example's build rule assumes you put the scripts into a scripts/ folder within your project directory. When I ignored this detail (creating a group in the Xcode project but not a folder on disk) I got a build error. Make sure the build rule points to the scripts' actual file location.
The script uses the --base-class argument. Unless your model classes are subclasses of a custom class (not NSManagedObject), you must delete this argument from the script. E.g.,
mogenerator --model "${INPUT_FILE_PATH}/$curVer" --output-dir "${INPUT_FILE_DIR}/" --base-class $baseClass
Now that Xcode 4 is released, take a look at the Issues page for mogenerator.
After I make changes to my model file, I just run mogenerator manually from the terminal. Using Xcode 4 and ARC, this does the trick:
cd <directory of model file>
mogenerator --model <your model>.xcdatamodeld/<current version>.xcdatamodel --template-var arc=YES
Maybe I'll use build scripts at some point, but the terminal approach is too simple to screw up.
I've found a Script in the "Build Phases" to be more reliable than the "Build Rules".
Under "Build Phases" for your Target, choose the button at the bottom to "Add Run Script". Drag the run script to the top so that it executes before compiling sources.
Remember that the actual data model files (.xcdatamodel) are contained within a package (.xcdatamodeld), and that you only need to compile the latest data model for your project.
Add the following to the script (replacing text in angle-brackets as appropriate)
MODELS_DIR="${PROJECT_DIR}/<path to your models without trailing slash>"
DATA_MODEL_PACKAGE="$MODELS_DIR/<your model name>.xcdatamodeld"
CURRENT_VERSION=`/usr/libexec/PlistBuddy -c 'Print _XCCurrentVersionName' "$DATA_MODEL_PACKAGE/.xccurrentversion"`
# Mogenerator Location
if [ -x /usr/local/bin/mogenerator ]; then
    echo "mogenerator exists in /usr/local/bin path";
    MOGENERATOR_DIR="/usr/local/bin";
elif [ -x /usr/bin/mogenerator ]; then
    echo "mogenerator exists in /usr/bin path";
    MOGENERATOR_DIR="/usr/bin";
else
    echo "mogenerator not found"; exit 1;
fi
$MOGENERATOR_DIR/mogenerator --model "$DATA_MODEL_PACKAGE/$CURRENT_VERSION" --output-dir "$MODELS_DIR/"
Add options to mogenerator as appropriate. --base-class <your base class> and --template-var arc=true are common.
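For example, with both options appended to the invocation above (the base class name is hypothetical):

$MOGENERATOR_DIR/mogenerator --model "$DATA_MODEL_PACKAGE/$CURRENT_VERSION" --output-dir "$MODELS_DIR/" --base-class MyManagedObject --template-var arc=true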
Random tip: if you get Illegal Instruction: 4 when you run mogenerator, install it from the command line:
$ brew update && brew upgrade mogenerator
