Creating a Go binary as a Debian binary package for a custom repository

I am trying to package a binary written in Go as a Debian/Ubuntu binary package. It would be available for download from a custom web server, signed with our own apt key.
I am - very - confused.
I looked at https://wiki.debian.org/Packaging/Intro?action=show&redirect=IntroDebianPackaging first. It looks exhaustive, not to say too complex.
What confuses me is that the file debian/rules contains a make command, but we don't have a Makefile. Do I need to create one?
In fact I am at the debuild -us -uc step, and it obviously failed.
Then I saw this: https://askubuntu.com/a/251892 , where it says:
Avoid the Debian bureaucracy by just building the binary:
dpkg-buildpackage -b
I did that, and the command completed, but the generated package doesn't contain any binaries, only a changelog.gz and a copyright file in the app's folder below /usr/share/doc.
So I am lost: I have no idea which tutorial to follow to build a custom binary package, which, by the way, will later be offered as a signed download. Obviously this is the first Debian/Ubuntu package I am creating.

I'm currently working on a project written in Go that is distributed as a .deb package in a repository.
I have to tell you that it is not easy to find documentation about this. I use fpm for that task.
First, create a folder that represents your project's layout on the destination computer, for example /tmp/proj.
Inside that folder you must put everything you want to distribute in the package. For example, if your compiled binary is called "myapp" and you want to put it in "/usr/bin/", then you have to create the folder "/tmp/proj/usr/bin" and put in it the executable file with the permissions you will use.
This way with all the files you want to distribute.
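For example, a sketch of the staging step, assuming the compiled binary is called myapp and should end up as /usr/bin/myapp:
mkdir -p /tmp/proj/usr/bin
cp myapp /tmp/proj/usr/bin/myapp
chmod 0755 /tmp/proj/usr/bin/myapp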
Then create a script that you will use to generate the package:
PKG_NAME= application name, one word, lowercase
PKG_DESCRIPTION= Brief description of the package
PKG_VERSION= Version, in x.y.z format
PKG_RELEASE= Correlative number from 1 onwards
PKG_MAINTAINER= Your name and email. Format: "name" < email >
PKG_VENDOR= Your company name
PKG_URL= URL of your product
FPM_OPTS="-n $PKG_NAME -v $PKG_VERSION --iteration $PKG_RELEASE"
fpm -s dir -t deb ${FPM_OPTS} -f \
--maintainer "$PKG_MAINTAINER" \
--vendor "$PKG_VENDOR" \
--url "$PKG_URL" \
--description "$PKG_DESCRIPTION" \
--architecture "amd64" \
-C /tmp/proj \
.
And that's it!
Well, there's a lot to learn.
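For completeness, regarding the original confusion about debian/rules: debian/rules is itself a Makefile (that is the "make command" the wiki refers to), so you don't need a separate Makefile if you go the debhelper route instead of fpm. A rough sketch for a Go program, where the binary/package name myapp is purely illustrative and recipe lines must be tab-indented:
#!/usr/bin/make -f
%:
	dh $@

# assumes the Go sources live at the top of the source tree
override_dh_auto_build:
	go build -o myapp .

override_dh_auto_install:
	install -D -m 0755 myapp debian/myapp/usr/bin/myapp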

Related

How do you make xdg-utils detect a file association for my application from a distribution package?

I am building a distribution package for xnec2c and I want .nec files to associate with xnec2c and also display the associated icon.
(This question is different than the many xdg-utils answers out there because it asks about packaging and whether xdg-* calls are actually necessary.)
I know I can register them with these:
xdg-mime install --mode system x-nec2c.xml
xdg-icon-resource install --mode system --context mimetypes --size 256 xnec2c.png application-x-nec2
xdg-mime default xnec2c.desktop # but `man` says not as root?
Should it be sufficient just to drop the .xml and .desktop and .png in the right place on the filesystem?
Does the post-install in the package need to run some or all of these commands?
I read here that the icon would be application-x-nec2 and not application/x-nec2. Is this correct?
I would like the files to work by placing them in the right spot without running the xdg-* tools if supported.
(The package is a .rpm, but type and distro shouldn't matter since xdg is a standard. I want the same basic process to work for .deb, too.)
Everything should associate as the mime type application/x-nec2 since "nec2" is the file format for ".nec" files. This is reflected in the .desktop and .xml definitions below unless I have an error. Please correct me if I have something out of place!
Here are the relevant files that deploy:
/usr/share/applications/xnec2c.desktop (see below)
/usr/share/mime/packages/x-nec2.xml (see below)
/usr/share/pixmaps/xnec2c.svg
/usr/share/icons/hicolor/256x256/apps/xnec2c.png
/usr/share/icons/hicolor/scalable/apps/xnec2c.svg
and here is their content:
==> /usr/share/applications/xnec2c.desktop <==
[Desktop Entry]
Name=Xnec2c
GenericName=NEC2 Simulator
Comment=Numerical Electromagnetics Code software
Exec=xnec2c
Icon=xnec2c
Terminal=false
Type=Application
Categories=Science;
Keywords=nec2;em;simulator;
X-Desktop-File-Install-Version=0.22
MimeType=application/x-nec2;
==> /usr/share/mime/packages/x-nec2.xml <==
<?xml version="1.0"?>
<mime-info xmlns='http://www.freedesktop.org/standards/shared-mime-info'>
<mime-type type="application/x-nec2">
<comment>NEC2 Antenna Simulation File</comment>
<icon name="application-x-nec2"/>
<glob-deleteall/>
<glob pattern="*.nec"/>
</mime-type>
</mime-info>
You don't need to run the xdg-* tools, but you do need to update the desktop and icon associations for the freedesktop environment after install, like so:
update-mime-database /usr/share/mime
update-desktop-database
gtk-update-icon-cache /usr/share/icons/hicolor
These should be run after uninstall, too, to clean up after removal. In RPMs this is the %postun section.
Substitute /usr/share with whatever deployment variable you might use like %{_datadir} for RPM .spec's.
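In a .spec file, the scriptlets could look roughly like this (a sketch; the || : guards just keep a missing tool from failing the transaction):
%post
update-mime-database %{_datadir}/mime >/dev/null 2>&1 || :
update-desktop-database >/dev/null 2>&1 || :
gtk-update-icon-cache %{_datadir}/icons/hicolor >/dev/null 2>&1 || :

%postun
update-mime-database %{_datadir}/mime >/dev/null 2>&1 || :
update-desktop-database >/dev/null 2>&1 || :
gtk-update-icon-cache %{_datadir}/icons/hicolor >/dev/null 2>&1 || :
On the .deb side, packages such as shared-mime-info and desktop-file-utils ship dpkg triggers that fire when files land in these directories, so explicit maintainer-script calls are often unnecessary there; verify on your target release.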
Also, the icon name had an issue: it should have just been named 'xnec2c', not 'application-x-nec2', because the 'xnec2c' icon gets installed by the package as follows:
%{_datadir}/icons/hicolor/scalable/apps/%{name}.svg
%{_datadir}/icons/hicolor/256x256/apps/%{name}.png

Install systemd service using autotools

I have an autotools project which successfully builds and tests an app (https://github.com/goglecm/AutoBrightnessCam). The app is installed in the bin directory (preceded by any prefix the user specifies). That's pretty straightforward. I now need to make a systemd service to start it at boot time. I've created the service file and ran it manually and it works fine.
The last bit is to tell configure.ac and Makefile.am to patch a *.service.in file with the correct path for the app (just like config.h is created from config.h.in).
Will using AC_CONFIG_HEADERS be appropriate to patch *.service.in into *.service? Is there another macro used for "non-headers" perhaps?
Also, how do I specify that the service file should land (i.e. installed) in /etc/systemd/system?
Is there perhaps a better way of starting this app at boot time without systemd?
How do I specify that the service file should land (i.e. installed) in /etc/systemd/system?
According to Systemd's daemon man page:
Installing systemd Service Files
At the build installation time (e.g. make install during package build), packages are recommended to install their systemd unit files in the directory returned by pkg-config systemd --variable=systemdsystemunitdir (for system services) or pkg-config systemd --variable=systemduserunitdir (for user services). This will make the services available in the system on explicit request but not activate them automatically during boot. Optionally, during package installation (e.g. rpm -i by the administrator), symlinks should be created in the systemd configuration directories via the enable command of the systemctl(1) tool to activate them automatically on boot.
Packages using autoconf(1) are recommended to use a configure script excerpt like the following to determine the unit installation path during source configuration:
PKG_PROG_PKG_CONFIG
AC_ARG_WITH([systemdsystemunitdir],
[AS_HELP_STRING([--with-systemdsystemunitdir=DIR], [Directory for systemd service files])],,
[with_systemdsystemunitdir=auto])
AS_IF([test "x$with_systemdsystemunitdir" = "xyes" -o "x$with_systemdsystemunitdir" = "xauto"], [
def_systemdsystemunitdir=$($PKG_CONFIG --variable=systemdsystemunitdir systemd)
AS_IF([test "x$def_systemdsystemunitdir" = "x"],
[AS_IF([test "x$with_systemdsystemunitdir" = "xyes"],
[AC_MSG_ERROR([systemd support requested but pkg-config unable to query systemd package])])
with_systemdsystemunitdir=no],
[with_systemdsystemunitdir="$def_systemdsystemunitdir"])])
AS_IF([test "x$with_systemdsystemunitdir" != "xno"],
[AC_SUBST([systemdsystemunitdir], [$with_systemdsystemunitdir])])
AM_CONDITIONAL([HAVE_SYSTEMD], [test "x$with_systemdsystemunitdir" != "xno"])
This snippet allows automatic installation of the unit files on systemd machines, and optionally allows their installation even on machines lacking systemd. (Modification of this snippet for the user unit directory is left as an exercise for the reader.)
Additionally, to ensure that make distcheck continues to work, it is recommended to add the following to the top-level Makefile.am file in automake(1)-based projects:
AM_DISTCHECK_CONFIGURE_FLAGS = \
--with-systemdsystemunitdir=$$dc_install_base/$(systemdsystemunitdir)
Finally, unit files should be installed in the system with an automake excerpt like the following:
if HAVE_SYSTEMD
systemdsystemunit_DATA = \
foobar.socket \
foobar.service
endif
...
So it appears you should use systemdsystemunitdir and systemduserunitdir. How well Autotools supports it, well...
A quick grep on Fedora 31 using grep systemdsystemunitdir /bin/autoconf and grep -IR systemdsystemunitdir /usr/share shows no Autotools support yet. 7 years and counting...
Is there perhaps a better way of starting this app at boot time without systemd?
Systemd should be OK to start your app. Simply use systemctl(1) to enable and start it as you normally would.
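For example, assuming the installed unit ends up named autobrightnesscam.service (matching the project's .service.in file):
systemctl enable autobrightnesscam.service
systemctl start autobrightnesscam.service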
Based on your GitHub and autobrightnesscam.service.in, I would not dick around with Autotools for this. You can waste copious amounts of time working around Autotools' shortcomings (speaking from experience).
My configure.ac script (which is just a shell script) would copy autobrightnesscam.service.in to autobrightnesscam.service, and then use sed to substitute in the correct directories and files. Then I would copy the updated autobrightnesscam.service to its proper location in AC_CONFIG_COMMANDS_POST. Maybe something like:
SERVICE_FILE=autobrightnesscam.service
SYSTEMD_DIR=`pkg-config systemd --variable=systemdsystemunitdir`
# Use default if SYSTEMD_DIR is empty
if test x"$SYSTEMD_DIR" = "x"; then
SYSTEMD_DIR=/etc/systemd/system
fi
AC_CONFIG_COMMANDS_POST([cp "$SERVICE_FILE" "$SYSTEMD_DIR"])
AC_CONFIG_COMMANDS_POST([systemctl enable "$SYSTEMD_DIR/$SERVICE_FILE"])
AC_CONFIG_COMMANDS_POST([systemctl start "$SERVICE_FILE"])
Will using AC_CONFIG_HEADERS be appropriate to patch *.service.in into *.service? Is there another macro used for "non-headers" perhaps?
No. AC_CONFIG_HEADERS is for setting up configuration headers to support your build. It is rarely used for anything other than building a config.h recording the results of certain tests that Autoconf performs, and it is not as flexible as other options in this area.
If you have additional files that you want Autoconf to build from templates then you should tell Autoconf about them via AC_CONFIG_FILES. Example:
AC_CONFIG_FILES([Makefile AutoBrightnessCam.service])
But if some of the data with which you are filling that template are installation directories, then Autoconf is probably not the right place to do this at all, because the build system makes provision for the installation prefix to be changed by arguments to make. You would at least need to work around that, but the best thing to do is to roll with it and build the .service file under make's control instead. It's not that hard, and there are several technical advantages, some of which apply even if there are no installation-directory substitutions to worry about.
You can do it the same way that configure does, by running the very same template you're already using through sed, with an appropriate script. Something like this would appear in your Makefile.am:
SERVICE_SUBS = \
s,[#]VARIABLE_NAME[#],$(VARIABLE_NAME),g; \
s,[#]OTHER_VARIABLE[#],$(OTHER_VARIABLE),g
AutoBrightnessCam.service: AutoBrightnessCam.service.in
$(SED) -e '$(SERVICE_SUBS)' < $< > $@
Also, how do I specify that the service file should land (i.e. installed) in /etc/systemd/system?
You use Automake's standard mechanism for specifying custom installation locations. Maybe something like this:
systemdsysdir = $(sysconfdir)/systemd/system
systemdsys_DATA = AutoBrightnessCam.service
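Putting it together, the relevant Makefile.am fragment might look roughly like this (a sketch; it reuses the SERVICE_SUBS rule shown above, and EXTRA_DIST/CLEANFILES are the usual bookkeeping so the template ships in the tarball and the generated unit is cleaned):
systemdsysdir = $(sysconfdir)/systemd/system
systemdsys_DATA = AutoBrightnessCam.service
EXTRA_DIST = AutoBrightnessCam.service.in
CLEANFILES = AutoBrightnessCam.service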
Is there perhaps a better way of starting this app at boot time without systemd?
On a systemd-based machine, systemd is in control of what starts at boot. If you want the machine to start your application automatically at boot, then I think your options are limited to
Configuring systemd to start it
Configuring something in a chain of programs ultimately started by systemd to start it
Hacking the bootloader or kernel to start it
There is room for diverging opinions here, but I think the first of those is cleanest and most future-proof, and I cannot recommend the last.

How to package a Kivy app with Pyinstaller

I have a lot of trouble following the instructions from the Kivy website; many steps aren't explained, like what I should answer to the warning.
WARNING: The output directory "..." and ALL ITS CONTENTS will be REMOVED! Continue? (y/n)
Even if I choose y, the folder isn't removed.
Also should I always add these lines:
from kivy.deps import sdl2, glew
Tree('C:\\Users\\<username>\\Desktop\\MyApp\\'),
*[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)]
in the .spec file? Why are they necessary?
Not much info is available for Kivy.
Because I spent a lot of time understanding how I should package my app, here are some instructions that would have really helped me.
Some info is available at http://pythonhosted.org/PyInstaller/
Python 3.6, as of March 2017
Because packaging my app gave me the error IndexError: tuple index out of range, I had to install the development version of PyInstaller:
pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip
Step 1:
I moved all the files of MyApp in a folder "C:\Users\<username>\Desktop\MyApp": the .py, the .kv and the images and I created an icon.ico.
I created another folder, C:\Users\<username>\Desktop\MyPackagedApp. In this folder I pressed Shift+right-click and selected "Open command window here".
Then I pasted this:
python -m PyInstaller --name MyApp --icon "C:\Users\<username>\Desktop\MyApp\icon.ico" "C:\Users\<username>\Desktop\MyApp\myapp.py"
This creates two folders, build and dist, and a .spec file. In dist/MyApp, I can find a .exe. Apparently, if my app is really simple (just one label), the packaged app can work without Step 2.
Step 2:
The second step involves editing the .spec file. Here is an example of mine.
(See Step 3 for the explanations about my_hidden_modules.)
I go back to the cmd, and enter
python -m PyInstaller myapp.spec
I then got this warning:
WARNING: The output directory "..." and ALL ITS CONTENTS will be REMOVED! Continue? (y/n)
I enter y and then press enter.
Because I chose y, I was surprised that the build folder was still there and that dist/MyApp still contained many files. But this is normal: PyInstaller can output either a single .exe file or a single folder containing all the script's dependencies plus an executable, and the default output is a single folder with multiple files.
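(If you do want the single-file variant, PyInstaller's --onefile option produces one self-contained .exe instead of a folder, e.g. python -m PyInstaller --onefile --name MyApp myapp.py, but for Kivy the one-folder layout used here is what the official instructions assume.)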
Step 3: adding hidden modules
When I click on the myapp.exe in dist/MyApp, the app crashed. In the log C:\Users\.kivy\logs\ I could find 2 errors: ModuleNotFoundError: No module named 'win32timezone' and SystemError: <class '_frozen_importlib._ModuleLockManager'>.
Because of this I had to edit the .spec file and add these lines:
my_hidden_modules = [
( 'C:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\Python36\\Lib\\site-packages\\win32\\lib\\win32timezone.py', '.' )
]
in a = Analysis I changed datas = [] to datas = my_hidden_modules,
Apparently this is because I used a FileChooser widget.
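The edited .spec file itself isn't reproduced above, so here is a rough sketch of how the pieces fit together for a one-folder build: the my_hidden_modules/datas change from Step 3 plus the Kivy Tree(...) entries quoted in the question. Exact keyword arguments vary between PyInstaller versions, so treat it as a template rather than a drop-in file:
# myapp.spec (sketch)
from kivy.deps import sdl2, glew

my_hidden_modules = [
    ('C:\\Users\\<username>\\AppData\\Local\\Programs\\Python\\Python36\\Lib\\site-packages\\win32\\lib\\win32timezone.py', '.')
]

a = Analysis(['C:\\Users\\<username>\\Desktop\\MyApp\\myapp.py'],
             datas=my_hidden_modules,
             hiddenimports=[])
pyz = PYZ(a.pure)
exe = EXE(pyz, a.scripts,
          exclude_binaries=True,
          name='MyApp',
          icon='C:\\Users\\<username>\\Desktop\\MyApp\\icon.ico')
coll = COLLECT(exe,
               Tree('C:\\Users\\<username>\\Desktop\\MyApp\\'),
               a.binaries, a.zipfiles, a.datas,
               *[Tree(p) for p in (sdl2.dep_bins + glew.dep_bins)],
               name='MyApp')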
So, the line:
ALL ITS CONTENTS will be REMOVED!
Yes, it will be removed AND replaced later with new files (check the timestamps). I think it prints "permission denied" if it can't do such a thing, both for individual files and for the whole folder, so you'd notice it. It's important, though, because you need to add additional files into your folder.
Those additional files are of two types:
kivy dependencies
application data
Dependencies are just binaries (plus loaders, licenses and the like); you get them through the *[Tree(p) ...] piece of code, which is essentially a command for "get all files from that folder". Without them Kivy won't even start.
Similarly, the second Tree(<app folder>) does the same, but for your own files: .py files, .kv files, images, music, databases, basically whatever you create.
Obviously if you remove the deps, app won't start and if you remove app data, you'll get some path errors and most likely crash. You don't want any of that :P
It also works if, in the 'a = Analysis(...)' block of the spec file, you replace
hiddenimports=[]
with
hiddenimports=['win32file', 'win32timezone']
(or with whatever modules are reported as missing).

How can I configure Module::Build to NOT install files as read-only?

I've encountered a scenario where I'm building a Perl module as part of another build system on a Windows machine. I use the --install_base option of Module::Build to specify a temporary directory to put module files in until the overall build system can use them. Unfortunately, that other build system has a problem if any of the files it depends on are read-only: it tries to delete any generated files before rebuilding them, and it can't clean read-only files (it tries to delete them, they're read-only, and that gives an error). By default, Module::Build installs its libraries with the read-only bit enabled.
One option would be to make a new step in the build process that removes the read-only bit from the installed files, but due to the nature of the build tool that will require a second temporary directory...ugh.
Is it possible to configure a Module::Build based installer to NOT enable that read-only bit when the files are installed to the --install_base directory? If so, how?
No, it's not a configurable option. It's done in the copy_if_modified method in Module::Build::Base:
# mode is read-only + (executable if source is executable)
my $mode = oct(444) | ( $self->is_executable($file) ? oct(111) : 0 );
chmod( $mode, $to_path );
If you controlled the Build.PL, you could subclass Module::Build and override copy_if_modified to call the base class and then chmod the file writable. But I get the impression you're just trying to install someone else's module.
Probably the easiest thing to do would be to install a copy of Module::Build in a private directory, then edit it to use oct(666) (or whatever mode you want). Then invoke perl -I /path/to/customized/Module/Build Build.PL. Or, (as you said) just use the standard Module::Build and add a separate step to mark everything writable afterwards.
Update: ysth is right; it's ExtUtils::Install that actually does the final copy. copy_if_modified is for populating blib. But ExtUtils::Install also hardcodes the mode to read-only. You could use a customized version of ExtUtils::Install, but it's probably easier to just have a separate step.
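If you do add that separate step, a tiny sketch of it in Perl: chmod on Windows clears the read-only attribute when write permission is added, so running something like this against the --install_base directory does the job (the script name and usage are illustrative):
# unlock.pl - clear the read-only bit on everything under a directory
use strict;
use warnings;
use File::Find;
my $dir = shift or die "usage: unlock.pl <install_base_dir>\n";
find(sub { chmod((stat)[2] | 0222, $_) if -f }, $dir);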

Mogenerator and Xcode 4

I just installed mogenerator+xmo'd on my development machine and would like to start playing with it. The only instructions I could really find online were from a previous SO post, and those don't work with XCode 4 (or at least ⌘I doesn't pull up metadata any more and I don't know how).
So to get things up and running, is it just a matter of adding xmod to the .xcdatamodeld's comments (wherever those are), after which the classes will be generated/updated on save?
While trying to find this answer myself, I found MOGenerator and Xcode 4 integration guide on esenciadev.com. This solution is not a push-button integration, but it works. The link has detailed instructions, but generally you:
Copy the shell scripts into your project
Add build rules to your target to run the two shell scripts
When you build your project, the script runs MOGenerator on all .xcdatamodel files in your project directory. After the build, if the script generates new class files, you must manually add them to your project. Subsequent builds will remember existing MO-Generated files.
Caveats:
The example's build rule assumes you put the scripts into a /scripts/ file folder within your project directory. When I ignored this detail (creating a project folder but not a file folder) I got a build error. Make sure the build rule points to the script's file location.
The script uses the --base-class argument. Unless your model classes are subclasses of a custom class (not NSManagedObject), you must delete this argument from the script. E.g.,
mogenerator --model "${INPUT_FILE_PATH}/$curVer" --output-dir "${INPUT_FILE_DIR}/" --base-class $baseClass
Now that Xcode 4 is released, take a look at the Issues page for mogenerator.
After I make changes to my model file, I just run mogenerator manually from the terminal. Using Xcode 4 and ARC, this does the trick:
cd <directory of model file>
mogenerator --model <your model>.xcdatamodeld/<current version>.xcdatamodel --template-var arc=YES
Maybe I'll use build scripts at some point, but the terminal approach is too simple to screw up.
I've found a script in "Build Phases" to be more reliable than the "Build Rules".
Under "Build Phases" for your Target, choose the button at the bottom to "Add Run Script". Drag the run script to the top so that it executes before compiling sources.
Remember that the actual data model files (.xcdatamodel) are contained within a package (.xcdatamodeld), and that you only need to compile the latest data model for your project.
Add the following to the script (replacing text in angle-brackets as appropriate)
MODELS_DIR="${PROJECT_DIR}/<path to your models without trailing slash>"
DATA_MODEL_PACKAGE="$MODELS_DIR/<your model name>.xcdatamodeld"
CURRENT_VERSION=`/usr/libexec/PlistBuddy "$DATA_MODEL_PACKAGE/.xccurrentversion" -c 'print _XCCurrentVersionName'`
# Mogenerator Location
if [ -x /usr/local/bin/mogenerator ]; then
echo "mogenerator exists in /usr/local/bin path";
MOGENERATOR_DIR="/usr/local/bin";
elif [ -x /usr/bin/mogenerator ]; then
echo "mogenerator exists in /usr/bin path";
MOGENERATOR_DIR="/usr/bin";
else
echo "mogenerator not found"; exit 1;
fi
$MOGENERATOR_DIR/mogenerator --model "$DATA_MODEL_PACKAGE/$CURRENT_VERSION" --output-dir "$MODELS_DIR/"
Add options to mogenerator as appropriate. --base-class <your base class> and --template-var arc=true are common.
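For example, with both options added (MYBaseManagedObject stands in for your own base class):
$MOGENERATOR_DIR/mogenerator --model "$DATA_MODEL_PACKAGE/$CURRENT_VERSION" \
    --output-dir "$MODELS_DIR/" \
    --base-class MYBaseManagedObject \
    --template-var arc=true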
Random tip: if you get Illegal Instruction: 4 when you run mogenerator, install it from the command line:
$ brew update && brew upgrade mogenerator
