How to make AppDir files available to an AppImage application?

My build system, which uses meson, puts some files my application needs into AppDir under AppDir/usr/share/myapp/resources. The application needs to both read and write those files while it is running. The files are present when I look inside AppDir, but once the .AppImage is generated, the standalone executable cannot access them. When the application is integrated with the desktop, it gets installed in ~/Applications, but it doesn't contain those files.
Here is a visualization of how it looks when the application is installed on the system without using AppImage (ninja install):
usr/
└── share/
    └── myapp/
        └── resources/
            ├── MainWindow.glade
            └── dataCache.json
When I do DESTDIR=AppDir ninja install the structure ends up like this:
AppDir/
└── usr/
    └── share/
        └── myapp/
            └── resources/
                ├── MainWindow.glade
                └── dataCache.json
When the application (MyApp.AppImage) is integrated into the user's desktop with AppImageLauncher, it only copies the AppImage into the Applications directory. There are no other folders or files.
Edit: I am using ./linuxdeploy-x86_64.AppImage --appdir AppDir to create the AppDir directory. Then I run DESTDIR=AppDir ninja install to install the app into AppDir, and finally ./linuxdeploy-x86_64.AppImage --appdir AppDir --output appimage to create the AppImage.
How would one go about accessing those files that were in AppDir once the app is bundled? Or how does one make the desktop integration copy those files into the Applications folder, so that the application can access them while it is running?

To resolve the AppImage mount point at runtime you can use the APPDIR environment variable. For example, to resolve usr/share/icons/hicolor/myicon.png you would use the path $APPDIR/usr/share/icons/hicolor/myicon.png.
It's recommended that you modify the application to resolve its resources relative to the binary location. As an alternative, you can use a custom environment variable to set up the path, or a configuration file next to your main binary.
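For illustration, here is a minimal sketch of that resolution logic in Python (hedged: the helper name is made up, and it assumes the resource layout from the trees above; the same idea applies in whatever language your app uses):
import os
import sys

def resource_path(*parts):
    """Resolve a file under <prefix>/share/myapp/resources, both inside
    and outside an AppImage."""
    appdir = os.environ.get("APPDIR")
    if appdir:
        # Inside an AppImage, APPDIR points at the mounted SquashFS root,
        # so the install prefix is $APPDIR/usr.
        prefix = os.path.join(appdir, "usr")
    else:
        # Plain installation: derive the prefix from the binary location,
        # e.g. /usr/bin/myapp -> /usr.
        prefix = os.path.dirname(os.path.dirname(os.path.abspath(sys.argv[0])))
    return os.path.join(prefix, "share", "myapp", "resources", *parts)

glade_file = resource_path("MainWindow.glade")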
Regarding writing files inside the AppImage: this is not possible by design. An AppImage is a read-only SquashFS image that is mounted at runtime. Application data should be written to $HOME/.config or $HOME/.local/share, depending on whether it is configuration data or some other kind of data. The recommended workflow is to copy such data there on the first run.
For more information about where application data belongs, see https://www.freedesktop.org/wiki/Software/xdg-user-dirs/
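A minimal sketch of that first-run copy, reusing the resource_path() helper from the sketch above (again hedged: the XDG fallback path is the standard one, everything else is illustrative):
import os
import shutil

def writable_data_path(name):
    """Return a writable copy of a bundled resource, seeding it on first run."""
    xdg_data = os.environ.get("XDG_DATA_HOME",
                              os.path.expanduser("~/.local/share"))
    target_dir = os.path.join(xdg_data, "myapp")
    os.makedirs(target_dir, exist_ok=True)
    target = os.path.join(target_dir, name)
    if not os.path.exists(target):
        # Copy the read-only file out of the AppImage once; afterwards
        # the application reads and writes the copy.
        shutil.copy(resource_path(name), target)
    return target

cache_file = writable_data_path("dataCache.json")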

Related

ImportRDF command uses appdata/local instead of appdata/roaming for repository location

I installed and am running Ontotext GraphDB v10.1.0 (free desktop, Windows). Everything works fine: I can create repositories, run SPARQL, etc.
The server and UI are both loading/running/reporting repositories in the C:\Users\<Username>\AppData\Roaming\Graph\data\repositories folder.
However, when running the ImportRdf.cmd utility, it's "attaching to"/creating the repository in the C:\Users\<Username>\AppData\Local\Graph\data\repositories folder instead!?
I tried adding the correct path to C:\Users\<user>\AppData\Local\GraphDB Desktop\app\GraphDB Desktop.cfg, but it makes no difference.
Has anyone experienced this or got any fixes?
The data (repository) directory can be set through the system or config property graphdb.home.data. The default value is the data subdirectory relative to the GraphDB home directory. For example, one way to configure it is to go into the bin folder of the GraphDB distribution and start GraphDB with the following command:
./graphdb -Dgraphdb.home="full path to where you want your repo directory"
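If you only want to relocate the data directory rather than the whole home, the same -D mechanism should work with the graphdb.home.data property mentioned above; setting it in conf/graphdb.properties is another option, so it applies regardless of how the server is launched (a hedged sketch: the paths are placeholders, and the exact behaviour of ImportRdf.cmd should be verified against your GraphDB version):
./graphdb -Dgraphdb.home.data="full path to your data directory"

# or persistently, in conf/graphdb.properties:
graphdb.home.data = full path to your data directory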

Set execution permission on files deployed from Windows to Lambda using Serverless

I'm using Serverless to deploy a Lambda function. I need to include an executable bin file, but when it is uploaded it doesn't have execute permissions, and I can't change the permissions after deployment. The only thing I can do is move the file to /tmp and change the permissions there. That works, but it adds a lot of overhead, because I have to move the files on every invoke since /tmp is ephemeral.
I know there is a known issue that Windows and Linux file permissions are different, so if you zip a file on Windows and unzip it on a Linux machine you will have problems with permissions, especially execution, and that is what happens when Serverless deploys the files.
Does anyone have a better workaround for this (rather than "deploy from a Windows machine")?
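One way to at least cut the overhead: /tmp is wiped between containers, but it survives warm invocations of the same container, so the copy-and-chmod only needs to happen at module load, not on every invoke. A minimal sketch assuming a Python runtime and a bundled binary called mybin (both names are illustrative):
import os
import shutil
import stat
import subprocess

SRC = os.path.join(os.path.dirname(__file__), "mybin")  # shipped without +x
DST = "/tmp/mybin"

# Module-level code runs once per container, at cold start only.
if not os.path.exists(DST):
    shutil.copy2(SRC, DST)
    os.chmod(DST, os.stat(DST).st_mode | stat.S_IEXEC)

def handler(event, context):
    # Warm invocations reuse the already-prepared /tmp copy.
    result = subprocess.run([DST], capture_output=True)
    return {"statusCode": 200, "body": result.stdout.decode()}
This does not remove the cold-start cost, but it stops the copy from happening on every invoke.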

Where are files allocated in Parse Server?

I'm implementing an instance of Parse Server, and I want to know where Parse Server stores its files.
According to the File Adapter documentation, the default file storage is GridFS in MongoDB.
It depends on the operating system and the type of installation you used.
If installed on Linux/Unix using the global install (npm install -g parse-server mongodb-runner), your parse-server files will normally be under /usr/lib/node_modules/parse-server (this may differ between Linux distributions).
Be careful when editing these files for hot hacks or modifications: if you later choose to upgrade parse-server, they will be overwritten.
Your cloud code directory is normally created by you, so it could be, for example, /home/parse/cloud/main.js. It can be in any location of your choice; you set that location in the index file or in the JSON config (depending on your startup process):
cloud: '/home/myApp/cloud/main.js', // Absolute path to your Cloud Code
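If you start the server from a JSON configuration file instead, the same option goes there. A hedged example (app ID, keys, and paths are placeholders):
{
  "appId": "myAppId",
  "masterKey": "myMasterKey",
  "databaseURI": "mongodb://localhost:27017/parse",
  "cloud": "/home/myApp/cloud/main.js"
}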
If you did not use the global install, then obviously you would need to cd to wherever you cloned the project.
Windows would be similar: clone (or download the zip of) parse-server from the repo, open a console window, and "cd" to the folder where you cloned/extracted it, e.g.:
cd "C:\parse-server"
That is where the files will sit on the parse-server. Hope this helps!

How to use an Ambari service to deploy a jar on all Hadoop nodes?

I have a requirement to deploy a jar file to a particular location on all Hadoop cluster nodes using Ambari server. For that purpose I think I can use the custom service feature.
So I created a sample service and was able to deploy it as a client or slave component on all nodes.
I added a new folder named Testservice inside /var/lib/ambari-server/resources/stacks/HDP/2.2/services/, and it has the following files/directories:
[machine]# pwd
/var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]# ls
configuration  metainfo.xml  package
[machine]# ls package/*
package/archive.zip

package/files:
filesmaster.py  test1.jar

package/scripts:
test_client.py
With this, my service is added and installed on all nodes. On each node, a corresponding directory /var/lib/ambari-agent/cache/stacks/HDP/2.2/services/Testservice is created with the same file structure as above. As of now the test_client.py script has no real code, just dummy implementations of the install and configure functions.
Now I want to add code that copies package/files/test1.jar on each host to a defined destination location, say the /lib folder.
I need help on this point: how can I make use of the test_client.py script, and how can I write generic code to copy my jar file?
test_client.py has an install method as shown below:
class TestClient(Script):
    def install(self, env):
I need more details on how the env variable can be used to get the required base paths for the Ambari service directory and the Hadoop install base paths.
You are correct in thinking that you can use a custom Ambari service to ensure a file is present on various nodes in your cluster. Your custom service should have a CLIENT component that handles laying down the files you need on the various hosts in the cluster. It should be a client component because it has no running processes.
However, using the files folder is not the correct approach to distribute the file you have (test1.jar). All the Ambari services rely on Linux packages to install the necessary files on the system. So what you should be doing is creating a software package that takes care of laying down that lib file in the correct location on disk. This could be an rpm and/or deb file, depending on which OSs you plan to support. Once you have the software package, you can accomplish your goal by modifying the two files you already have outlined above.
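For reference, a minimal RPM spec for such a package might look like the following (a hedged sketch: name, version, and license are placeholders, and test1.jar is assumed to sit in the rpmbuild SOURCES directory):
Name:      my_package_name
Version:   1.0
Release:   1
Summary:   Installs test1.jar into /lib
License:   Proprietary
BuildArch: noarch

%description
Lays down test1.jar at a fixed location for the Testservice client.

%install
mkdir -p %{buildroot}/lib
cp %{_sourcedir}/test1.jar %{buildroot}/lib/

%files
/lib/test1.jar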
metainfo.xml - You will list the necessary software packages required for your service to function correctly. For example, if you were planning to support RHEL6 and RHEL7, you would create an rpm package named my_package_name and include it with this code:
<osSpecifics>
  <osSpecific>
    <osFamily>redhat6,redhat7</osFamily>
    <packages>
      <package>
        <name>my_package_name</name>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>
test_client.py - You will need to replace the starter code you have in your question with:
class TestClient(Script):
    def install(self, env):
        self.install_packages(env)
The self.install_packages(env) call will ensure that the packages you have listed in the metainfo.xml file get installed when your custom service CLIENT component is installed.
Note: Your software package (rpm, deb, etc.) will have to be hosted in an online repository for Ambari to be able to access and install it. You could create a local repository on the node running Ambari Server using httpd and createrepo. This process can be gleaned from the HDP documentation.
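For example, setting up such a local repository on the Ambari Server node could look roughly like this (hedged: the package name and paths are illustrative, and the .repo file must also be placed on every cluster node):
# on the node running Ambari Server
yum install -y httpd createrepo
mkdir -p /var/www/html/myrepo
cp my_package_name-1.0-1.noarch.rpm /var/www/html/myrepo/
createrepo /var/www/html/myrepo
service httpd start

# /etc/yum.repos.d/myrepo.repo on each cluster node
[myrepo]
name=My local repo
baseurl=http://<ambari-server-host>/myrepo
enabled=1
gpgcheck=0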
Alternative approach (not recommended)
Now that I have explained the way it SHOULD be done, let me tell you how you can achieve this using the package/files folder. Again, this is not the recommended way to install software on a Linux system; the distribution's package management system should be handling it.
test_client.py - Update your starter file to include the content below. For this example we will copy your test1.jar to the /lib folder with file permissions 0644, owner 'guest', and group 'hadoop':
def configure(self, env):
    # Lay down test1.jar from package/files into /lib with the desired
    # owner, group, and mode.
    File("/lib/test1.jar",
         mode=0644,
         group="hadoop",
         owner="guest",
         content=StaticFile("test1.jar")
    )
Why is this approach not recommended? Because installing software on a Linux distribution should be managed in a way that makes it easy to upgrade and remove that software. Ambari does not have full uninstall functionality for its services: the most you can do is remove a service from being managed in your Ambari cluster, after which all its files remain on the system and have to be removed manually or with a custom script. If you instead use package management to install the files, you can remove the software easily with the same package management system.

Remove execute permission on file downloaded on a Mac

We have a web app running on a Windows server which allows a user to do some processing and download the results. The result is a set of files that are dynamically created on the server and zipped into a single file to make the download easier.
Everything works fine on Windows, but when users download the file from the web app on a Mac, the contents of the zip file have the execute (chmod +x) permission set (I presume the same happens on *NIX and Linux machines). This can, of course, be removed by running chmod -x, but is there a way to prevent the execute permission from being set by default when the files are extracted on a Mac?
I believe it's not possible: zip files created on Windows don't store Unix permission bits, so on a Mac the extractor has to pick a default, and it defaults to "most permissive" (otherwise applications inside the zip might not be marked executable when they need to be).
A tar archive, for instance, does record permissions, but that would be a bit more difficult to create on a Windows server.
