Is it possible to create an executable binary from a RHEL6 .rpm file and its dependencies?

There is a Red Hat EL 6 server on which I need to "install" a package that is not part of the official repository.
So far so good: I would download the .rpm file manually, satisfy the dependencies, and install.
The problem is that I do not have root access, and the administrator does not install packages that are not official.
However, if I have a binary with execute permission in my home directory, I can run it and my need is met.
So I ask: is it possible to generate a directory containing the executable binary plus all the libraries it depends on, starting from an RPM file? How is this done?
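(One common approach, not from the original question: unpack each downloaded .rpm under your home directory with rpm2cpio and point the dynamic loader at the bundled libraries. The package and binary names below are placeholders.)
mkdir -p ~/apps && cd ~/apps
# extract the payload of the package and of every dependency rpm (no root needed)
rpm2cpio ~/downloads/mypackage.rpm | cpio -idmv
rpm2cpio ~/downloads/some-dependency.rpm | cpio -idmv
# run the binary with the bundled libraries on the loader's search path
LD_LIBRARY_PATH=~/apps/usr/lib64:~/apps/usr/lib ~/apps/usr/bin/mybinary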

Related

Installing an RPM in Yocto

So I have a Yocto build and I need to install this third-party RPM. I've created a recipe for it using the link to the source RPM. The problem is implementing the do_install() function.
I need to install it; it is normally installed via rpm --install rpm_package, and then I need to enable its service.
For the service I know I have to inherit systemd in my recipe file, but for the installation I'm still confused.
The install command only creates directories and copies files over.
Any help is appreciated.
Indeed, do_install only creates and populates install directories under your /tmp/work/cortex[...]/my_recipe_name/ directory.
Normally, if you bitbake an image which includes your recipe, you should get a warning like:
Files/directories were installed but not shipped in any package
So, after the installation you need to "package" the files so that they end up in your final Linux image; to do so, use FILES_${PN}, for instance:
FILES_${PN}_append = " /usr/sbin/"
where /usr/sbin is the directory containing what you want to "install" into your image.
Then the package will be shipped in your final image.
For the service, inherit systemd is indeed mandatory, and you have to add this to do_install:
install -Dm0644 ${WORKDIR}/my_script.service ${D}${systemd_system_unitdir}/my_script.service
and at the end of your .bb file:
SYSTEMD_SERVICE_${PN} = "my_script.service"
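Putting it together, a minimal sketch of such a recipe (my_tool and my_script.service are placeholder names, not from the question):
inherit systemd

do_install() {
    # stage the binary and the unit file into the package's install tree
    install -d ${D}/usr/sbin
    install -m 0755 ${WORKDIR}/my_tool ${D}/usr/sbin/my_tool
    install -Dm0644 ${WORKDIR}/my_script.service ${D}${systemd_system_unitdir}/my_script.service
}

FILES_${PN}_append = " /usr/sbin/"
SYSTEMD_SERVICE_${PN} = "my_script.service"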
You can install packages at runtime with rpm, but this is not recommended for production builds: the whole idea behind Yocto is to create a custom distribution with everything you need already integrated.
In order to use the rpm command at runtime you need to configure it to fetch packages from the right location. The right location is generally a Yocto build, because its packages are compatible with your board's architecture and system (otherwise, you need to handle that manually).
You can link the Yocto build's rpm files to the board's rpm command using smart; for more info, first check the official Yocto documentation here.
Also, you can check more examples here and here.
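As a rough sketch (the server URL and package name are placeholders; this assumes you export your build's deploy/rpm directory over HTTP), pointing smart at a Yocto package feed looks like:
# register the feed, refresh the channel metadata, then install
smart channel --add myfeed type=rpm-md baseurl=http://my-server/rpm/all
smart update
smart install my_package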
IMPORTANT
I do not recommend creating a systemd/sysvinit service to install an rpm/deb/ipk/tar package.
The idea is: if you can install a package with rpm, it means the package is already supported by Yocto and has a recipe. So, simply:
IMAGE_INSTALL_append = " package"

Installing without a package manager: why does an executable binary fail with "command not found" unless I start the command with "./"?

I'm learning to use GNU/Linux and I want to know how to install programs that cannot be installed with the package manager.
I downloaded the tarball with the Linux 64-bit Binaries (including one called "haxelib"), extracted it, changed directory in the terminal to their location (~/Downloads/things/haxe_20201231082044_5e33a78aa/), and used chmod to make them executable.
If I try a command such as haxelib list, then the terminal returns
haxelib: command not found
If I try ./haxelib list (the same command but with ./ at the start) instead, then the command works as expected.
Why can't I use it without the ./? Programs installed with the package manager can be used without the ./.
Edit: I should probably also ask: where should I put the files from the tarball? Should they all go together in the same place? I have a feeling that a folder named "things" in my Downloads folder is not the best place for them.
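(A sketch of the usual fix, not from the thread; ~/.local/opt is just one reasonable choice. The shell resolves a bare command name only through the directories listed in $PATH, never the current directory, which is why the ./ prefix is needed.)
mkdir -p ~/.local/opt
mv ~/Downloads/things/haxe_20201231082044_5e33a78aa ~/.local/opt/haxe
export PATH="$HOME/.local/opt/haxe:$PATH"   # add this line to ~/.profile to make it permanent
haxelib list   # now found without the leading ./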

Differences between RPMs created with rpmbuild and with the Nebula RPM plugin for Gradle

I am trying to use the Nebula rpm plugin for Gradle to build RPMs. I am finding the following discrepancy between RPMs built this way and RPMs built the traditional way, with spec files and rpmbuild.
In a spec file, you might have something like this:
%dir /usr/local/myapp/logs
This would create the directory /usr/local/myapp/logs when the rpm is installed. Once myapp starts to run it would write logs to this directory. When the app is uninstalled, rpm would understand that the files under /usr/local/myapp/logs were not created by the rpm installation process and therefore not delete this directory or the files within it. If the directory were empty at the time of uninstallation, then the directory would be removed.
There is a similar directive with the Gradle plugin. If you include
directory('/usr/local/myapp/logs')
in the build script, this directory will be created similar to the rpm process. However, in this instance, when the rpm is uninstalled, the directory and any files that have been added within it since installation will be removed.
I am trying to account for this difference. The RPM plugin is based on the redline-rpm Java package, and from looking at its source and the usual Red Hat rpm documentation, I cannot find any setting that governs this behavior.
Can anyone hazard a guess what might be going on here to create this difference in behavior?
Update: this post has some pretty good information on how this works, but I still don't know the name of any directive that alters this behavior.
Update 2: Now this starts to get very interesting. If I run rpm -evv myapp on the rpm built with the Gradle plugin, after installation and after having added a file to /usr/local/myapp/logs, I see the following:
D: fini 040755 2 (7007, 500) 4096 /usr/local/myapp/logs
D: erase rmdir of /usr/local/myapp/logs failed: Directory not empty
and yet, after the operation is complete, the directory is gone!
How can this be? Could there be some configuration of the rpm executable itself that allows the deletion to take place?
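(One diagnostic worth trying, not from the thread; the .rpm file names are placeholders: dump the per-file metadata of both packages and compare how each one records the logs directory.)
rpm -qlp --dump myapp-rpmbuild.rpm | grep /usr/local/myapp/logs
rpm -qlp --dump myapp-gradle.rpm | grep /usr/local/myapp/logs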

CMake: Where to install FooBarConfig.cmake and FooBarConfigVersion.cmake?

Suppose I have a library A that I build with CMake. I also want to distribute it via a package (e.g. RPM).
Where should my package install the files AConfig.cmake and AConfigVersion.cmake?
In /usr/share/cmake/Modules on Linux?
You should find what you need here:
http://www.cmake.org/Wiki/CMake/Tutorials/Packaging
With the relevant portion of the text:
Consider a project "Foo" that installs the following files:
<prefix>/include/foo-1.2/foo.h
<prefix>/lib/foo-1.2/libfoo.a
It may also provide a CMake package configuration file
<prefix>/lib/foo-1.2/foo-config.cmake
The config files need to be in your install tree. Only FindXXX.cmake files should go in the modules directory.
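As an illustration, a common modern layout is <prefix>/lib/cmake/<PackageName>/, one of the places find_package() searches in config mode; a minimal install rule for library A might look like this (the build-tree paths are assumptions, the file names are taken from the question):
# install the package configuration files next to the library,
# not into CMake's own Modules directory
install(FILES
        "${CMAKE_CURRENT_BINARY_DIR}/AConfig.cmake"
        "${CMAKE_CURRENT_BINARY_DIR}/AConfigVersion.cmake"
        DESTINATION lib/cmake/A)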

Creating a MacPorts port which doesn't need installation: no dependencies, only extraction

Goal
I am trying to create a port (MacPorts) for an open source tool based on Eclipse which doesn't need installation; in other words, it's just an "extract and use" case. Users can download the tool from the official project site and use it just like that. So there is no DESTROOT variable set.
Since many Mac users are used to the convenience of MacPorts, I would like to add the tool there, so that users could instantly install or uninstall it.
Important notice: once a user starts the tool, it creates a "/workspace" directory in the same place the tool was installed, to keep the user's preferences, settings, and other necessary files. So, when a user starts the tool, the program must have write access to the directory it was installed in. The current version of the tool doesn't provide a way to choose the workspace location.
Problem
How should I organize the Portfile?
I have set the following configuration, where I tell MacPorts not to use the configure, build, and destroot phases.
set cm_workspace /workspace
universal_variant no
use_configure no
supported_archs noarch
post-extract {
    file mkdir ${worksrcpath}${cm_workspace}
    destroot.keepdirs-append ${worksrcpath}${cm_workspace}
}
build {}
destroot {}
As I understand it,
the extract phase untars the file,
the install phase should archive those files,
and finally the activate phase should move the files to the destroot.
But I keep getting errors.
---> Extracting cubridmanager
---> Configuring cubridmanager
---> Building cubridmanager
---> Staging cubridmanager into destroot
Error: No files have been installed in the destroot directory!
Error: Please make sure that this software supports 'make install DESTDIR=${destroot}' or implement an alternative destroot mechanism in the Portfile.
Error: Files might have been installed directly into your system, check before proceeding.
Error: Target org.macports.destroot returned: Staging cubridmanager into destroot failed
Log for cubridmanager is at: /opt/local/var/macports/logs/_Users_nbp_macports_databases_cubridmanager/cubridmanager/main.log
Error: Status 1 encountered during processing.
To report a bug, see <http://guide.macports.org/#project.tickets>
I want to contribute to that open source community, but I can't get past this step.
You misunderstood the phases; the usual workflow is as follows:
1. extract untars the downloaded file
2. patch applies any local patches
3. configure runs ./configure
4. build runs make
5. destroot runs make install DESTDIR=${destroot}
6. install packs the files in the destroot area into an archive
7. activate moves the files into ${prefix}
So, in your case, you don't need steps 2, 3 and 4. But you still need to copy the files to the destroot area in step 5, the destroot phase. Otherwise MacPorts does not know which files it is supposed to install.
supported_archs noarch
use_configure no
build {}
destroot {
    copy ${worksrcpath} ${destroot}${prefix}/some/path
}
Note that MacPorts discourages installing files outside the prefix directory, as the installation is meant to be self-contained. The path /workspace sounds like a pretty bad idea. Rather, you should use a path inside the user's home directory to save any data; otherwise the tool cannot be used on a computer with multiple user accounts. Of course, the actual executable files can reside in the MacPorts prefix.
Normally, UNIX software separates binaries, libraries, and shared data in /usr (or, in the MacPorts case, /opt/local) from user-specific data in the home directory. If your tool does not follow this convention, this needs to be fixed by the developers first.
I don't think that tool fits well with MacPorts, for related reasons:
All files from MacPorts should be in one of the supported directories, i.e. staged into the destroot and ending up in /opt/local.
The project tries to write to subdirectories of its install location, which is not good here.
The directories MacPorts writes to can only be written to by the macports user, so as to minimize the ability to affect the build and run environment.
On a multi-user system, who owns the directory being written to? Ports are installed as the macports user but run as someone else. Also, if there is more than one normal user, who writes to the directory?
I think you need to patch the tool so that it is passed a directory in which to create the workspace when a normal user runs it, while the tool itself is installed, owned by macports, in /opt/local/bin.
