How to use Ambari service to deploy a jar on all hadoop nodes? - hortonworks-data-platform

I have a requirement where I want to deploy a jar file to a particular location on all Hadoop cluster nodes using the Ambari server. For that purpose I think I can use the service feature.
So I created a sample service and could deploy it as a client or slave component on all nodes.
I added a new folder named Testservice inside /var/lib/ambari-server/resources/stacks/HDP/2.2/services/ and it has the following files/directories:
[machine]# cd /var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]#
[machine]# pwd
/var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]# ls
configuration metainfo.xml package
[machine]# ls package/*
package/archive.zip
package/files:
filesmaster.py test1.jar
package/scripts:
test_client.py
[machine]#
With this my service is added and installed on all nodes. On each node, a corresponding directory "/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/Testservice" is created with the same file structure as above. As of now the test_client.py script has no real code, just dummy implementations of the install and configure functions.
Here I want to add code so that package/files/test1.jar is copied on each host to a defined destination location, say the /lib folder.
I need help on this point. How can I make use of the test_client.py script? How can I write generic code to copy my jar file?
test_client.py has an install method as shown below:
class TestClient(Script):
    def install(self, env):
        pass  # dummy implementation
I need more details on how the env variable can be used to get all the required base paths for the Ambari service directory and the Hadoop install base paths.

You are correct in thinking that you can use a Custom Ambari Service to ensure a file is present on various nodes in your cluster. Your custom service should have a CLIENT component which handles laying down the files you need on various hosts in the cluster. It should be a client component because it has no running processes.
However, using the files folder is not the correct approach to distribute the file you have (test1.jar). All the Ambari services rely on Linux packages to install the necessary files on the system. So what you should be doing is creating a software package that takes care of laying down that lib file in the correct location on disk. This could be an rpm and/or deb file depending on which OSs you are planning to support. Once you have the software package, you can accomplish your goal by modifying the two files you already have outlined above.
metainfo.xml - You will list the necessary software packages required for your service to function correctly. For example if you were planning on supporting RHEL6 and RHEL7 you would create an rpm package named my_package_name and include it with this code:
<osSpecifics>
  <osSpecific>
    <osFamily>redhat6,redhat7</osFamily>
    <packages>
      <package>
        <name>my_package_name</name>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>
test_client.py - You will need to replace the starter code you have in your question with:
class TestClient(Script):
    def install(self, env):
        self.install_packages(env)
The self.install_packages(env) call will ensure that the packages you have listed in the metainfo.xml file get installed when your custom service's CLIENT component is installed.
Note: Your software package (rpm, deb, etc.) will have to be hosted in an online repository in order for Ambari to access it and install it. You could create a local repository on the node running Ambari Server using httpd and createrepo. This process can be gleaned from the HDP Documentation.
Alternative approach (Not Recommended)
Now that I have explained the way it SHOULD be done, let me tell you how you can achieve this using the package/files folder. Again, this is not the recommended approach to installing software on a Linux system; the package management system for your distribution should be handling this.
test_client.py - Update your starter file to include the content below. For this example we will copy your test1.jar to the /lib folder with file permissions 0644, owner 'guest', and group 'hadoop':
def configure(self, env):
    File("/lib/test1.jar",
         mode=0644,
         group="hadoop",
         owner="guest",
         content=StaticFile("test1.jar")
    )
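For reference, a complete test_client.py for this alternative approach might look roughly like the sketch below. It follows the usual pattern of HDP stack client scripts (wildcard resource_management import, a no-op-style status, and an execute() entry point); the destination path, owner, and group are just the example values from above, so adjust them for your environment:
from resource_management import *

class TestClient(Script):
    def install(self, env):
        # Nothing is installed from packages in this approach;
        # just lay down the file by calling configure.
        self.configure(env)

    def configure(self, env):
        # Copies package/files/test1.jar from the agent cache to /lib on this host.
        File("/lib/test1.jar",
             mode=0644,
             group="hadoop",
             owner="guest",
             content=StaticFile("test1.jar"))

    def status(self, env):
        # Client components have no running process to report status for.
        raise ClientComponentHasNoStatus()

if __name__ == "__main__":
    TestClient().execute()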
Why is this approach not recommended? Installing software on a Linux distribution should be managed so that it is easy to upgrade and remove said software. Ambari does not have full uninstall functionality when it comes to its services. The most you can do is remove a service from being managed in your Ambari cluster; after doing so, all those files will remain on the system and would have to be removed by writing a custom script or doing it manually. However, if you used package management to handle installing the files, you could easily remove the software using the same package management system.

Related

How to make AppDir files available to an AppImage application?

My build system, which uses meson, puts some files my application needs into AppDir under AppDir/usr/share/myapp/resources. The application needs to both read and write those files while it is running. The files are in AppDir when I look at it, but when the .AppImage is generated, the standalone running executable cannot access those files. When integrating the application with the desktop, the application gets installed in ~/Applications, but it doesn't contain those files.
Here is a visualization of how it looks when the application is installed on the system without using AppImage (ninja install):
🗀 usr
  🗀 share
    🗀 myapp
      🗀 resources
        🖹 MainWindow.glade
        🖹 dataCache.json
When I do DESTDIR=AppDir ninja install, the structure ends up like this:
🗀 AppDir
  🗀 usr
    🗀 share
      🗀 myapp
        🗀 resources
          🖹 MainWindow.glade
          🖹 dataCache.json
When the application (MyApp.AppImage) is integrated into the user's desktop with AppImageLauncher, it only copies the AppImage into the Applications directory. There are no other folders or files.
Edit: I am using ./linuxdeploy-x86_64.AppImage --appdir AppDir to create the directory AppDir. Then I use DESTDIR=AppDir ninja install to install the app to AppDir, and then I use ./linuxdeploy-x86_64.AppImage --appdir AppDir --output appimage to create the AppImage
How would one go about accessing those files that were in AppDir once the app is bundled? Or how does one make the desktop integration copy those files to the Applications folder so that the application can access them while it is running?
To resolve the AppImage mount point at runtime you can use the APPDIR environment variable. For example, if you want to resolve usr/share/icons/hicolor/myicon.png you need to use the following path $APPDIR/usr/share/icons/hicolor/myicon.png.
It's recommended that you modify the application to be able to resolve its resources depending on the binary location. As an alternative, you can use a custom environment variable to set up the path or a configuration file next to your main binary.
Regarding writing files inside the AppImage: this is not possible by design. An AppImage is a read-only SquashFS image that is mounted at runtime. Any application data should be written to $HOME/.config or $HOME/.local/share, depending on whether it is configuration data or some other kind of data. The recommended workflow is to copy such data on the first run.
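As an illustration, a first-run copy could look roughly like the sketch below (shown in Python for brevity; the same idea applies in any language, and the "myapp" directory and file names are just the ones from the question's layout):
import os
import shutil

def resource_path(relative):
    # APPDIR is set by the AppImage runtime to the mount point of the image;
    # outside an AppImage, fall back to the directory of this file.
    base = os.environ.get("APPDIR", os.path.dirname(os.path.abspath(__file__)))
    return os.path.join(base, "usr", "share", "myapp", "resources", relative)

def writable_copy(filename):
    # Copy the read-only bundled file to $XDG_DATA_HOME (or ~/.local/share)
    # the first time the application runs, and return the writable path.
    data_home = os.environ.get("XDG_DATA_HOME",
                               os.path.expanduser("~/.local/share"))
    data_dir = os.path.join(data_home, "myapp")
    os.makedirs(data_dir, exist_ok=True)
    target = os.path.join(data_dir, filename)
    if not os.path.exists(target):
        shutil.copy(resource_path(filename), target)
    return target

# Read-only resource served straight from the mounted AppImage:
glade_file = resource_path("MainWindow.glade")
# Read/write data copied out of the image on first run:
cache_file = writable_copy("dataCache.json")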
For more information about whether to copy your application data see https://www.freedesktop.org/wiki/Software/xdg-user-dirs/

Where are files allocated in Parse Server?

I'm implementing an instance of Parse Server, and I want to know where Parse Server allocates the files.
According to the File Adapter documentation, the default file storage is GridFS in MongoDB.
It depends on the operating system and the type of installation you used.
If installed on Linux/Unix using the global install (npm install -g parse-server mongodb-runner), then your parse-server files will normally be under /usr/lib/node_modules/parse-server (this may differ between Linux distributions).
Be careful when editing these files for hot hacks or modifications; if you later choose to upgrade parse-server, they will be overwritten.
Your cloud file directory is normally created by you, so this could be /home/parse/cloud/main.js. It can be in any location of your choice. To set a new location, you set it in the index file or JSON config (depending on your startup process):
cloud: '/home/myApp/cloud/main.js', // Absolute path to your Cloud Code
If you did not use the global install, then obviously you would need to cd to wherever you cloned the project.
Windows would be similar. Clone (or download the zip of) parse-server from the repo, open a console window, and "cd" to the folder where you have cloned/extracted the example server, e.g.:
cd "C:\parse-server"
This is where the files will sit for parse-server. Hope this helps!

run.as option does not work with any user other than the nifi user

I want to run my NiFi application as ec2-user rather than the default nifi user. I changed run.as=ec2-user in bootstrap.conf, but it did not work. It is not allowing me to start the NiFi application; I get the following error while starting the nifi service.
./nifi.sh start
nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /opt/nifi/current
Bootstrap Config File: /opt/nifi/current/conf/bootstrap.conf
User Runnug Nifi Application : sudo -u ec2-user
Error: Could not find or load main class org.apache.nifi.bootstrap.RunNiFi
Any pointers on this issue?
This is most likely a file permission problem, which is not covered by installing the service with nifi.sh install. A summary of the required permissions includes:
Read access to the entire distribution in the NIFI_HOME directory
Write access to the NIFI_HOME directory itself - NiFi will create a number of directories and files at runtime including logs, work, state, and various repositories.
Write access to the bin directory
Write access to the conf directory
Write access to the lib directory, and to all of the files in the lib directory
It is certainly possible to narrow the permissions by creating the working directories manually, and by adjusting NiFi's settings to rearrange the directory layout. But the permissions above should get you started.

Where is the ejabberd server configuration file on Mac OS?

I have installed ejabberd 15.07 on my Mac OS. After installation, I want to configure it by editing /Applications/ejabberd-15.07/conf/ejabberd.yml. I am not sure whether it is the file I should change; I searched the Internet and found that somebody said the configuration is in the /etc folder, but I did not find it there. In order to prove that it is the file I want, I opened the admin interface and added a record in the "ACL" screen. After that I checked ejabberd.yml, but it remained unchanged. So is it the configuration file of ejabberd? If it is not, which file should it be, and how do I configure it?
The location of the config file depends on how you installed ejabberd.
Apparently, you used the binary installer, not make install, so the config file is where you expected:
/Applications/ejabberd-15.07/conf/ejabberd.yml
The admin interface does not change the config file but writes to the Mnesia database. You could configure ejabberd so that the database overrides the config file, but this is not good practice. To make a change permanent, you need to edit the ejabberd.yml file.
Note: You should use the latest published ejabberd version (15.11) if you are starting today.

How to set up Pydevd remote debugging with Heroku

According to this answer, I am required to copy the pycharm-debug.egg file to my server. How do I accomplish this with a Heroku app so that I can remotely debug it using PyCharm?
Heroku doesn't expose the file system it uses for running web dynos to users. That means you can't copy the file to the server via ssh.
So, you can do this in one of two ways:
The best possible way to do this is to add the egg file to your requirements, so that during deployment it gets installed into the environment and hence automatically added to the Python path. But this would require the package to be pip-indexed.
Or, commit this file to your code base, so that when you deploy, the file reaches the server.
Also, in the settings file of your project (if using Django), add this file to the Python path:
import sys
sys.path.append("relative/path/to/file")  # placeholder: path to the debug egg in your code base
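Once the egg is importable, the usual way to attach from the running dyno is a pydevd.settrace() call early in your code. A rough sketch follows, with placeholder host and port; the host must be reachable from the Heroku dyno (for example via a public address or a tunnel), and it is where the PyCharm debug server is listening:
import pydevd

pydevd.settrace(
    'your-publicly-reachable-host.example.com',  # host running PyCharm (placeholder)
    port=21000,                                  # PyCharm debug server port (placeholder)
    stdoutToServer=True,                         # mirror stdout back to the IDE
    stderrToServer=True,                         # mirror stderr back to the IDE
    suspend=False,                               # don't pause execution on connect
)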
