Differences between RPMs created with rpmbuild and the Nebula os-package RPM plugin - Gradle

I am trying to use the Nebula rpm plugin for Gradle to build RPMs. I am finding the following discrepancy between RPMs built this way and RPMs built the traditional way, with spec files and rpmbuild.
In a spec file, you might have something like this:
%dir /usr/local/myapp/logs
This would create the directory /usr/local/myapp/logs when the RPM is installed. Once myapp starts to run, it would write logs to this directory. When the app is uninstalled, rpm would understand that the files under /usr/local/myapp/logs were not created by the RPM installation process and would therefore not delete the directory or the files within it. If the directory were empty at the time of uninstallation, the directory itself would be removed.
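For context, the directive would normally sit in the spec file's %files section, something like this (paths illustrative):
%files
/usr/local/myapp/bin/myapp
%dir /usr/local/myapp/logs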
There is a similar directive with the Gradle plugin. If you include
directory('/usr/local/myapp/logs')
in the build script, this directory will be created at install time, just as with rpmbuild. In this instance, however, when the RPM is uninstalled, the directory and any files added to it since installation are removed.
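For reference, here is roughly how I am declaring it (a trimmed sketch; the plugin id and Rpm task type follow the gradle-ospackage-plugin docs, everything else is illustrative):
// build.gradle
apply plugin: 'nebula.ospackage'

task myAppRpm(type: com.netflix.gradle.plugins.rpm.Rpm) {
    packageName = 'myapp'
    version = '1.0'
    directory('/usr/local/myapp/logs')   // intended to behave like %dir
}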
I am trying to account for this difference. The RPM plugin is based on the redline-rpm Java library, and from looking at its source, and at the usual Red Hat RPM documentation, I cannot find any setting that governs this behavior.
Can anyone hazard a guess what might be going on here to create this difference in behavior?
Update: this post has some pretty good information on how this works, but I still don't know the name of any directive that alters this behavior.
Update 2: Now this starts to get very interesting. If I run rpm -evv myapp on the RPM built with the Gradle plugin, after installation and after having added a file to /usr/local/myapp/logs, I see the following:
D: fini 040755 2 (7007, 500) 4096 /usr/local/myapp/logs
D: erase rmdir of /usr/local/myapp/logs failed: Directory not empty
And yet, after the operation completes, the directory is gone!
How can this be? Could there be some configuration of the rpm executable itself that allows the deletion to take place?
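For anyone trying to reproduce this, comparing how the two artifacts record the directory entry might help narrow it down (file names illustrative):
# ls-style listing of package contents, including directory entries and modes
rpm -qlvp myapp-from-rpmbuild.rpm | grep logs
rpm -qlvp myapp-from-gradle.rpm | grep logs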

Related

Go API installation fails "evq/chromaticity"

I am trying to install chromaticity on my own machine for testing, and no matter what I do I always hit the error seen in this picture: installation error
I don't know why this happens; I tried searching but found nothing online. My question is: does anyone know why this happens, or can anyone point me in the right direction? I have checked the folders and, yes, there are no Go files in there, but I don't see why that is a problem.
The api could be found here: https://github.com/evq/chromaticity
This is not an issue (as in a bug) with the project, but rather an issue due to a lack of documentation on how to build the project itself.
If you look at the Makefile in the root directory, you'll notice that static/static.go is a file generated as part of the build process. Such a file is usually not committed to the repo, so you'll need to build it yourself. To do so, you'll need to have go-bindata installed.
Here's what you need to do in order to build the project successfully:
Get the go-bindata package
go get -u github.com/jteeuwen/go-bindata/...
Get the project
go get github.com/evq/chromaticity
Go to the project root directory
cd [...the chromaticity project root..]
Run make to generate the static/static.go file
make
Build/install the project
go install
Update:
Noticed from your screenshot that you're using Windows, in that case you may need to workaround the issue of running Makefile in Windows. See here for possible solution: How to run a makefile in Windows?
I've run into the same issue when trying to "get" and then install this project. I looked into the code and there is no trace of an Asset() function in github.com/evq/chromaticity/static. Moreover, the git history does not show any .go files in the static/ directory. Personally, I would open an issue in the project and/or look for a different repo containing the desired functionality.
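For context, static/static.go is normally generated by go-bindata and exposes the embedded assets; the Asset() function mentioned above is one of its documented accessors. A compilable stub illustrating the signature the rest of the project compiles against (the real generated file returns the embedded bytes):
// static/static.go — illustrative stub only; go-bindata generates the real one.
package static

import "fmt"

// Asset returns the embedded contents of the named file.
func Asset(name string) ([]byte, error) {
	return nil, fmt.Errorf("asset %q not found (illustrative stub)", name)
}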

When is postupgrade actually called in a macOS pkg?

Good morning, I am reading about the predefined scripts in macOS to use when creating a pkg for my application.
In particular, I have some doubts about how to make sure that the postupgrade script is used.
What I read till now is:
from here
The postupgrade script is run after files have been installed and before the postflight script if one is defined. This script is run only if the component has been previously installed. If the script does not return an exit status of zero, Installer will declare the installation has failed.
OK, so it seems that postupgrade runs automatically when an upgrade is done. BUT... from man pkgbuild, section --scripts scripts-path:
Archive the entire contents of scripts-path as the package scripts. If this directory contains scripts named preinstall and/or postinstall, these will be run as the top-level scripts of the package. If you want to run scripts for specific bundles, you must specify those in a component property list; see more at COMPONENT PROPERTY LIST. Any other files under scripts-path will be used only if the top-level or component-specific scripts invoke them.
So it seems I should add it to the component plist, since the man page says nothing about postupgrade. BUT that seems strange; I would expect to put more specific scripts there, not the postupgrade script.
Reading more, I found this, which refers to this, where it is written:
To determine whether a package has already been installed or not, Installer.app looks at the content of the following directory: /Library/Receipts. If there is a file named PackageName.pkg within it, then the package has already been installed; otherwise it is a first install.
Well, my application leaves no pkg file there, but it is present in InstallHistory.plist.
Well, finally, the question: should I set the upgrade script somewhere, for example in the component.plist file? The last link seems to be out of date; has something changed? How can I put a pkg file inside /Library/Receipts? Or better, how can I be sure whether my installation is indeed a fresh installation and not an upgrade, or vice versa?
Thanks everyone, I am a bit confused...
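Regarding the receipt check quoted above: on modern macOS, receipts live in a database queried with pkgutil rather than as loose .pkg files in /Library/Receipts. A minimal sketch of detecting an upgrade from a preinstall script (com.example.myapp is a hypothetical identifier):
#!/bin/sh
# preinstall: pkgutil --pkg-info succeeds only if a receipt for this
# identifier already exists, i.e. the package was installed before.
if pkgutil --pkg-info com.example.myapp >/dev/null 2>&1; then
    echo "this run is an upgrade"
else
    echo "this run is a first install"
fi
exit 0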

configure command not found cygwin

This question has been asked many times, but I was not able to resolve the problem from those answers, so I am asking again.
I installed Cygwin a few days ago. I tried using the ./configure command, but it says
-bash: ./configure: No such file or directory
I tried using
where configure
but I got the output
INFO: Could not find files for the given pattern(s).
then I tried grep configure and I got this output
/etc/bash_completion.d/configure
/usr/i686-pc-cygwin/sys-root/usr/share/libtool/libltdl/configure
/usr/share/ELFIO/configure
/usr/share/libtool/libltdl/configure
I tried to export the path and then run ./configure, but that didn't work either.
I find no executable file named configure in my Cygwin bin directory.
Does that mean I have to add the configure file manually? How can I fix it?
NOTE: I also tried sh configure, but that didn't work either.
If a software project is set up to be built using autoconf, that tool generates a script canonically called configure. It queries the system for various parameters that are subsequently used in the build, and is specific to the software package to be built. Different software projects have different configure scripts. They are all called configure, but their contents are not the same.
So, to actually build such a software project once that script has been set up (usually done by the maintainers when packaging the source tarball for distribution), you call:
tar xzf <tarball>.gz # or xjf <tarball>.bz2 or whatever
cd <sourcedir> # the one you just untarred
./configure
make
make install
Note the prefix ./, which means "located in this directory" (i.e. the top directory of that project's source tree).
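To illustrate (a hypothetical session; the current directory is deliberately not in PATH on most systems):
$ configure          # bare name is searched in PATH only, so it is not found
-bash: configure: command not found
$ ./configure        # explicit relative path runs the script in this directory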
Actually, the better procedure is the so-called "out-of-tree build", when you set up a different directory for the binaries to be built in, so the source tree remains unmodified:
tar xzf <tarball>.gz # or xjf <tarball>.bz2 or whatever
mkdir builddir
cd builddir
../<sourcedir>/configure
make
make install
So, there is supposed to be no configure executable in your PATH; you are supposed to call the script of that name from the source tree you are trying to build from.
If I understood correctly...
configure is not an application that should be installed on your system, but a script delivered with the source code to prepare for the make command. A file named configure should be in the main directory of the source code.
I understand that this is an old question; however, many might find this solution helpful.
Normally we use the make command to compile downloaded source in Cygwin. In many cases the source contains an autogen.sh file. Running that file with
bash autogen.sh
will in many cases solve the problem. At least it solved my issue, and I could then use the make command.
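Putting it together, the typical sequence for a source tree that ships autogen.sh instead of a pre-generated configure script (assuming autoconf/automake are installed):
bash autogen.sh   # generates the configure script
./configure       # then the usual steps
make
make install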

Sourcing common functions in debian maintainer scripts

I have a number of common functions that I would like to source so they are available in the Debian maintainer scripts (preinst/postinst/prerm/postrm); call it common.sh.
If I add "common.sh" to the DEBIAN directory, dpkg complains:
dpkg-deb: warning: conffile '' is not a plain file
dpkg-deb: warning: ignoring 1 warning about the control file(s)
However, the package builds properly.
When I install, it's difficult to find the proper directory where my common.sh exists. In preinst it seems to be looked for under /var/lib/dpkg/tmp.ci, while in postinst it seems to be looked for under /var/lib/dpkg/info.
I could stick common.sh in a tmp directory and delete it later, but I get the feeling that files installed to the OS should remain there until dpkg can remove them.
At any rate, I'm wondering what the true 'debian' way of doing this would be?
The preinst is run from an implementation-defined directory because the package hasn't been unpacked into its proper location in the filesystem yet.
I'm pretty sure forcing extra files into the DEBIAN part is not allowed for standard packages. You could install common.sh into the filesystem, usually under /usr/share/yourpackagename/, and then source it from the postinst and prerm scripts.
This doesn't work for preinst and postrm, as the package contents are not available at those points.
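A minimal sketch of that approach (the package name and helper function are hypothetical):
# debian/mypackage.install — ship the helper as part of the package payload
common.sh usr/share/mypackage/

# debian/postinst
#!/bin/sh
set -e
# Safe here: the payload is already unpacked, so the helper exists on disk.
. /usr/share/mypackage/common.sh
common_configure "$@"   # hypothetical function defined in common.sh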

Creating Macports port which doesn't need installation, no dependency, only extract

Goal
I am trying to create a MacPorts port for an open source tool based on Eclipse which doesn't need installation; in other words, it's an "extract and use" case. Users can download the tool from the official project site and use it just like that. So there is no DESTROOT variable set.
Since many Mac users are used to the convenience of MacPorts, I would like to add the tool there, so that users can instantly install or uninstall it.
Important notice: once users start the tool, it creates a "/workspace" directory in the same place the tool was installed, to keep users' preferences, settings, and other necessary files. So, when a user starts the tool, the program must have write access to the directory it was installed in. The current version of the tool doesn't provide a way to choose the workspace location.
Problem
How should I organize the Portfile?
I have set the following configuration, where I tell MacPorts not to run the configure, build, and destroot phases.
set cm_workspace /workspace
universal_variant no
use_configure no
supported_archs noarch
post-extract {
    file mkdir ${worksrcpath}${cm_workspace}
    destroot.keepdirs-append ${worksrcpath}${cm_workspace}
}
build {}
destroot {}
As I understand it,
the extract phase untars the file,
the install phase should archive those files,
and finally the activate phase should move the files to the destroot.
But I keep getting errors.
---> Extracting cubridmanager
---> Configuring cubridmanager
---> Building cubridmanager
---> Staging cubridmanager into destroot
Error: No files have been installed in the destroot directory!
Error: Please make sure that this software supports 'make install DESTDIR=${destroot}' or implement an alternative destroot mechanism in the Portfile.
Error: Files might have been installed directly into your system, check before proceeding.
Error: Target org.macports.destroot returned: Staging cubridmanager into destroot failed
Log for cubridmanager is at: /opt/local/var/macports/logs/_Users_nbp_macports_databases_cubridmanager/cubridmanager/main.log
Error: Status 1 encountered during processing.
To report a bug, see <http://guide.macports.org/#project.tickets>
I want to contribute to that open source community, but I can't get past this step.
You misunderstood the phases; the usual workflow is as follows:
extract untars the downloaded file
patch applies any local patches
configure runs ./configure
build runs make
destroot runs make install DESTDIR=${destroot}
install packs the file in the destroot area into an archive
activate moves the files into ${prefix}
So, in your case, you don't need steps 2, 3 and 4. But you still need to copy the files to the destroot area in step 5, the destroot phase. Otherwise MacPorts does not know which files it is supposed to install.
supported_archs noarch
use_configure no
build {}
destroot {
    copy ${worksrcpath} ${destroot}${prefix}/some/path
}
Note that MacPorts discourages installing files outside the prefix directory, as the installation is meant to be self-contained. The path /workspace sounds like a pretty bad idea. Rather, you should use a path inside the user's home directory to save any data, as otherwise the tool cannot be used on a computer with multiple user accounts. Of course, the actual executable files can reside in the MacPorts prefix.
Normally, UNIX software separates binaries, libraries, and shared data in /usr (or, in the MacPorts case, /opt/local) from user-specific data in the home directory. If your tool does not follow this convention, this needs to be fixed by the developers first.
I don't think that tool fits with MacPorts, for related reasons:
All files from a port should end up in one of the supported directories, i.e. staged in the destroot and installed under /opt/local.
The project tries to write to subdirectories of its install location, which is not good here.
The directories MacPorts installs to can only be written to by the macports user, so as to minimize the ability to affect the build and run environment.
On a multiuser system, who owns the directory to write to? Ports are installed as the macports user and are run as someone else. Also, if there is more than one normal user, who writes to the directory?
I think you need to patch the tool so that a normal user running it can pass in a directory to create the workspace in, while the tool itself is installed as owned by macports in /opt/local/bin.
