Get WebSphere installed application's mapped shared library using a Jython script - websphere

I want to list the currently mapped shared libraries for a specific application name.
This is how I map a shared library:
AdminApp.edit(''+eval("app_Name")+'', '[ -MapSharedLibForMod [[ "'+eval("app_Name")+'" META-INF/application.xml "'+shared_lib_name+'" ]]]' )
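To read the current mappings back instead of editing them, one option is AdminApp.view with the same task name. A minimal sketch, assuming wsadmin Jython and that app_Name holds the application name as above:
appName = eval("app_Name")
# Print the module-to-shared-library assignments currently saved for this application
print AdminApp.view(appName, ['-MapSharedLibForMod'])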

Related

How to make AppDir files available to an AppImage application?

My build system, which uses meson, puts some files my application needs into AppDir under AppDir/usr/share/myapp/resources. The application needs to both read and write those files while it is running. The files are in AppDir when I look at it, but once the .AppImage is generated, the standalone executable cannot access those files. When the application is integrated with the desktop, it gets installed in ~/Applications, but it doesn't contain those files.
Here is a visualization of how it looks when the application is installed on the system without using AppImage (ninja install):
🗀 usr
  🗀 share
    🗀 myapp
      🗀 resources
        🖹 MainWindow.glade
        🖹 dataCache.json
When I do DESTDIR=AppDir ninja install, the structure ends up like this:
🗀 AppDir
  🗀 usr
    🗀 share
      🗀 myapp
        🗀 resources
          🖹 MainWindow.glade
          🖹 dataCache.json
When the application (MyApp.AppImage) is integrated into the user's desktop with AppImageLauncher, it only copies the AppImage into the Applications directory. There are no other folders or files.
Edit: I am using ./linuxdeploy-x86_64.AppImage --appdir AppDir to create the directory AppDir. Then I use DESTDIR=AppDir ninja install to install the app to AppDir, and finally I use ./linuxdeploy-x86_64.AppImage --appdir AppDir --output appimage to create the AppImage.
How would one go about accessing those files that were in AppDir once the app is bundled? Or how does one make the desktop integration copy those files to the Applications folder so that the application can access them while it is running?
To resolve the AppImage mount point at runtime you can use the APPDIR environment variable. For example, if you want to resolve usr/share/icons/hicolor/myicon.png you need to use the following path $APPDIR/usr/share/icons/hicolor/myicon.png.
It's recommended that you modify the application so it can resolve its resources relative to the binary location. As an alternative, you can use a custom environment variable to set up the path, or a configuration file next to your main binary.
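A minimal sketch of that idea in Python (the helper name and the fallback layout relative to the binary are assumptions, not part of the AppImage tooling):
import os
import sys

def resource_path(name):
    appdir = os.environ.get("APPDIR")
    if appdir:
        # Running from the mounted AppImage: resources live under the mount point
        return os.path.join(appdir, "usr/share/myapp/resources", name)
    # Ordinary install: resolve ../share/myapp/resources relative to the binary in .../bin
    bindir = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.normpath(os.path.join(bindir, "..", "share", "myapp", "resources", name))

glade_file = resource_path("MainWindow.glade")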
Regarding writing files inside the AppImage: this is not possible by design. An AppImage is a read-only SquashFS image that is mounted at runtime. Any application data should be written to $HOME/.config or $HOME/.local/share, depending on whether it's configuration data or some other kind of data. The recommended workflow is to copy such data on the first run.
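A minimal sketch of that first-run copy, assuming the dataCache.json path from the question and the usual XDG default locations:
import os
import shutil

data_home = os.environ.get("XDG_DATA_HOME", os.path.expanduser("~/.local/share"))
writable_cache = os.path.join(data_home, "myapp", "dataCache.json")

if not os.path.exists(writable_cache):
    # Seed the writable copy from the read-only file bundled inside the AppImage
    bundled = os.path.join(os.environ.get("APPDIR", ""), "usr/share/myapp/resources/dataCache.json")
    os.makedirs(os.path.dirname(writable_cache), exist_ok=True)
    shutil.copy(bundled, writable_cache)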
For more information about where to place your application data, see https://www.freedesktop.org/wiki/Software/xdg-user-dirs/

Running Jar in Remote Access VS Jar in Shared Folder?

I have a jar on SERVER-A that reads from a file on SERVER-A. When I remote into SERVER-A (Remote Desktop Connection) and run it, it works fine.
But when I run the same jar from a shared folder of SERVER-A via my local machine (not SERVER-A), it cannot find the file.
I tried to print the current directory of the jar while it is running.
1. When running from Remote Desktop on SERVER-A I got "E:\ISO_Tester", which is the correct path.
2. When running from the shared folder of SERVER-A I got "C:\Windows", which I think comes from my local machine instead of SERVER-A.
How can I make the jar read from the server when it is being run from the shared folder?
P.S. I'm using an environment variable for the location of the file to be read.
String propFile=System.getenv("OTHERS_HOME") + "\\conf\\FILE_TO_READ.txt";

How to use Ambari service to deploy a jar on all hadoop nodes?

I have a requirement where I want to deploy a jar file to a particular location on all Hadoop cluster nodes using the Ambari server. For that purpose I think I can use the custom service feature.
So I created a sample service and was able to deploy it as a client or slave on all nodes.
I added a new folder named Testservice inside /var/lib/ambari-server/resources/stacks/HDP/2.2/services/ and it has the following files/directories:
[machine]# cd /var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]# pwd
/var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]# ls
configuration metainfo.xml package
[machine]# ls package/*
package/archive.zip
package/files:
filesmaster.py test1.jar
package/scripts:
test_client.py
[machine]#
With this my service is added and installed on all nodes. On each node, a corresponding directory "/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/Testservice" is created with the same file structure as shown above. As of now the test_client.py script has no real code at all, just dummy implementations of the install and configure functions.
So here I want to add code that copies package/files/test1.jar on each host to a defined destination location, say the /lib folder.
I need help on this point. How can I make use of the test_client.py script? How can I write generic code to copy my jar file?
test_client.py has an install method as shown below:
class TestClient(Script):
    def install(self, env):
        pass
I need more details on how the env variable can be used to get all the required base paths for the Ambari service directory and the Hadoop install base paths.
You are correct in thinking that you can use a Custom Ambari Service to ensure a file is present on various nodes in your cluster. Your custom service should have a CLIENT component which handles laying down the files you need on various hosts in the cluster. It should be a client component because it has no running processes.
However, using the files folder is not the correct approach to distribute the file you have (test1.jar). All the Ambari services rely on Linux packages to install the necessary files on the system. So what you should be doing is creating a software package that takes care of laying down that lib file to the correct location on disk. This could be an rpm and/or deb file depending on which OSs you are planning to support. Once you have the software package, you can accomplish your goal by modifying the two files you already have outlined above.
metainfo.xml - You will list the necessary software packages required for your service to function correctly. For example if you were planning on supporting RHEL6 and RHEL7 you would create an rpm package named my_package_name and include it with this code:
<osSpecifics>
  <osSpecific>
    <osFamily>redhat6,redhat7</osFamily>
    <packages>
      <package>
        <name>my_package_name</name>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>
test_client.py - You will need to replace the starter code you have in your question with:
class TestClient(Script):
    def install(self, env):
        self.install_packages(env)
The self.install_packages(env) call will ensure that the packages you have listed in the metainfo.xml file get installed when your custom service's CLIENT component is installed.
Note: Your software package (rpm, deb, etc.) will have to be hosted in an online repository in order for Ambari to access it and install it. You could create a local repository on the node running Ambari Server using httpd and createrepo. This process can be gleaned from the HDP Documentation.
Alternative approach (Not Recommended)
Now that I have explained the way it SHOULD be done, let me tell you how you can achieve this using the package/files folder. Again, this is not the recommended approach to installing software on a Linux system; the package management system for your distribution should be handling this.
test_client.py - Update your starter file to include the content below. For this example we will copy your test1.jar to the /lib folder with file permissions 0644, owner 'guest', and group 'hadoop':
def configure(self, env):
    # File and StaticFile are Ambari resource_management resources;
    # StaticFile("test1.jar") picks the file up from the service's package/files directory
    File("/lib/test1.jar",
         mode=0644,
         group="hadoop",
         owner="guest",
         content=StaticFile("test1.jar")
         )
Why is this approach not recommended? Because installing software on a Linux distribution should be managed so that it is easy to upgrade and remove that software. Ambari does not have full uninstall functionality when it comes to its services. The most you can do is remove a service from being managed in your Ambari cluster; after doing so, all those files will remain on the system and would have to be removed with a custom script or by hand. If you had used package management to install the files, you could easily remove the software through the same package management system.

Vagrant file structure and web root

I've read the docs and a few things still confuse me, mostly related to synced folders and database data.
I want to use the following folder structure on my host machine:
ROOT
|- workFolder
||- project1
|||- project1DatabaseAndFiles
|||- project1WebRoot
||- project2
|||- project2DatabaseAndFiles
|||- project2WebRoot
||- project3
|||- project3DatabaseAndFiles
|||- project3WebRoot
And then create VMs where each VM's web root points to the appropriate projectX/projectXWebRoot folder.
From what I've read, I can only specify one remote Sync DIR. (http://docs.vagrantup.com/v2/synced-folders/). But if I create a new VM I want to specify the project name too, thereby selecting the correct host folder.
Is what I'm describing possible using Vagrant?
If I wanted another developer to use this environment, I'd like for them to have instant access to the database structure/setup etc without having to import any SQL files. Is this possible?
I'm hoping I'm just not understanding Vagrant's purpose, but this seems like a good use of shared VMs to me. Any pointers or articles that might help would be very welcome.
From what I've read, I can only specify one remote Sync DIR.
No, that is not true. You can always add more shared folders. From the manual:
This directive is used to configure shared folders on the virtual machine and may be used multiple times in a Vagrantfile.
This means you can define additional shared folders using:
config.vm.share_folder "name", "/path/on/vm", "path/on/host"
If I wanted another developer to use this environment, I'd like for them to have instant access to the database structure/setup etc without having to import any SQL files. Is this possible?
Yes, you can alter the data storage path of, say, MySQL to store it on a share on the host so that the data is not lost when the VM is destroyed.
However, this is not as simple as it sounds. If you're using the MySQL cookbook (again, assuming you're using MySQL), you have to modify it so that the shared folder is mounted with the uid and gid of the mysql user, as otherwise the user can't write to it. You can mount a share manually like this:
mount -t vboxsf -o uid=`id -u mysql` -o gid=`id -g mysql` sharename /new/data/dir
Also, if you're using Ubuntu or Debian Wheezy, AppArmor needs to be configured differently for MySQL, as it does not allow writes to the newly configured data directory. This can be done by writing
/new/data/dir r,
/new/data/dir/** rwk,
to /etc/apparmor.d/local/usr.sbin.mysqld. This version of the mysql cookbook supports this behaviour already, so you can look up how it does that.

.Net Remoting server dll in same directory as the executable

I have an application that hosts a remote object. For the client application to access this remote object, the DLL with the remote server implementation should be in the same directory where the executable of the server application resides. When I install this application, the DLL ends up in a different directory, and I manually copy it to the directory where the server executable resides.
I do not want to do this every time. Is there a way to get around this problem? That is, can the application load the DLL from where it is available rather than requiring it to be in the same directory as the executable?
See http://www.informit.com/articles/article.aspx?p=30601&seqNum=6
