The installation of the Workload Scheduler agent component fails during startup

The installation of the Workload Scheduler agent component fails during the "Start up IBM Workload Scheduler" step with the following message:
tebctl-tws_cpa_agent_agt94 agent not installed properly

Check that the installation directory has the proper permissions set.
For example, if you are installing in the /opt/IBM directory and its permissions are set to 750, change them to 755.

This problem can arise for one of the following three reasons (a quick permissions check follows the list):
1) the TWS user "agt94" cannot read the file "ita.ini" located in the "TWS_HOME/ITA/cpa/ita" folder;
2) the TWS user "agt94" does not have execute permission on the file "agent.sh" in the same folder;
3) the TWS user "agt94" does not have execute permission on the file "agent" in the same folder.
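All three come down to ownership and mode bits under TWS_HOME. A minimal check-and-fix sketch, assuming the paths and user from the message above; the exact modes (755/644) are typical choices, not values mandated by the product documentation:

# Inspect current ownership and modes:
ls -ld /opt/IBM
ls -l "$TWS_HOME/ITA/cpa/ita/ita.ini" "$TWS_HOME/ITA/cpa/ita/agent" "$TWS_HOME/ITA/cpa/ita/agent.sh"

# Make the parent directory traversable and the agent files usable by agt94:
chmod 755 /opt/IBM
chmod 644 "$TWS_HOME/ITA/cpa/ita/ita.ini"                                  # read access for agt94
chmod 755 "$TWS_HOME/ITA/cpa/ita/agent" "$TWS_HOME/ITA/cpa/ita/agent.sh"   # execute permission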

Related

Unable to save output from R scripts in a system directory using a DevOps pipeline

I am running R scripts on a self-hosted Azure DevOps agent. My Windows agent is able to access the system directory where it is hosted. Below is the directory structure for my code:
Agent location : F:/agent
Source code : F:/agent/deployment/projects/project1/sourcecode
DWH dump : F:/agent/deployment/DWH_dump/2021/
Output location : F:/agent/deployment/projects/project1/output_data/2021
The agent uses CMD in the DevOps pipeline to trigger R on the system and uses the libraries from the system directory.
Problem statement: I am unable to save the output from my R script to the Output location directory. It fails with a "permission denied" error pointing at that directory.
Output file format: file_name.rds, but the same issue happens even for a CSV file.
Command leading to failure (corrected here to the actual saveRDS signature; output_object and output_loc stand for the object being saved and the Output location path):
saveRDS(output_object, file = paste0(output_loc, "/", file_name, ".rds"))
Workaround: I save the files to the Source code directory first and then copy them to the Output location directory. This works perfectly fine, but costs me two extra hours of run time, because I have to save all the intermediate files and delete them at the end; keeping the intermediate files in memory eats up my RAM.
I have not opened that directory anywhere on the machine; the only open application is the browser where the pipeline is running. I spent hours trying to figure out the reason, with no success. I even checked the system PATH to see whether that directory was mentioned there; it is not.
When I run the same script directly on the machine using RStudio, I have no issues saving the file to any directory.
I have already spent two full days on this. Any pointers to the root cause could save me a few hours of runtime.
The solution was to set the Azure Pipelines agent service in Windows to run with admin credentials. The agent had not been configured as an admin during creation, so after reconfiguring the service to run under my user ID, which has admin access on the VM, the pipelines were able to save files without any trouble.
Feels great, saved a few hours of run time!
I was able to achieve this by following this post.
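For reference, a sketch of how the service account can be switched from an elevated command prompt. The service name and credentials below are hypothetical (self-hosted agents typically register a service named vstsagent.<organization>.<pool>.<machine>; check services.msc for yours):

:: Find the actual agent service name:
sc query state= all | findstr /i "vstsagent"

:: Reconfigure the service to run under an account with admin rights, then restart it:
sc config "vstsagent.myorg.Default.MYVM" obj= ".\myadminuser" password= "mypassword"
sc stop "vstsagent.myorg.Default.MYVM"
sc start "vstsagent.myorg.Default.MYVM"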

run.as option does not work with any user other than the default nifi user

I want to run my NiFi application as ec2-user rather than the default nifi user. I changed run.as=ec2-user in bootstrap.conf, but it did not work: NiFi will not start, and I get the following error when starting the service:
./nifi.sh start
nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /opt/nifi/current
Bootstrap Config File: /opt/nifi/current/conf/bootstrap.conf
User Running NiFi Application: sudo -u ec2-user
Error: Could not find or load main class org.apache.nifi.bootstrap.RunNiFi
Any pointers on this issue?
This is most likely a file permission problem, which is not covered by installing the service with nifi.sh install. A summary of the required permissions includes:
Read access to the entire distribution in the NIFI_HOME directory
Write access to the NIFI_HOME directory itself - NiFi will create a number of directories and files at runtime including logs, work, state, and various repositories.
Write access to the bin directory
Write access to the conf directory
Write access to the lib directory, and to all of the files in the lib directory
It is certainly possible to narrow the permissions by creating the working directories manually, and by adjusting NiFi's settings to rearrange the directory layout. But the permissions above should get you started.
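As a minimal sketch, assuming the NiFi home of /opt/nifi/current shown in the output above and run.as=ec2-user, giving ec2-user ownership of the whole distribution covers every point in the list (including the read access to lib/ that the "Could not find or load main class" error points to):

# Give ec2-user ownership of the entire NiFi distribution:
sudo chown -R ec2-user:ec2-user /opt/nifi/current

# If /opt/nifi/current is a symlink (a common layout, but an assumption here),
# fix the link target as well:
sudo chown -R ec2-user:ec2-user "$(readlink -f /opt/nifi/current)"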

izpack without run-privileged cannot write to C:\MyDirName

I have a custom Java app and an IzPack installer. For years, my IzPack build file has contained the following:
<run-privileged condition="izpack.windowsinstall.vista|izpack.windowsinstall.7"/>
The problem is that some users do not have admin privileges on their PCs, but they still want to be able to install the package. If I remove the line above, they can run the installer, but it then complains "This directory cannot be written!" when they try to install to the default location, which is C:\OPENDCS.
Yet the same user can create this directory from either a CMD or an Explorer window.
Is there a way to allow the IzPack installer to create a directory directly under C:\ without running as an administrator?
Please check the behavior with IzPack v5.0.7. The problem you mention should be fixed by this issue: https://izpack.atlassian.net/browse/IZPACK-1355
You could package your directory-creation operations in a create-dirs.bat batch file, which you would mark <executable> and execute with stage="postinstall". That way the directory creation is executed with the given user's permissions, which (according to your post) should work just fine.
EDIT 29/02/2016: You would put this file into a first "dummy" <pack>, mark it <executable> with stage="postinstall" as stated above, and it would be executed after this first dummy pack is installed. At the installation of the next pack (i.e. your first useful pack), the folder will already exist.
Note that postinstall does not run the batch file after the whole installation, but after the <pack>'s installation.
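As a sketch, the helper itself can be a one-liner; the name create-dirs.bat comes from the answer above, and C:\OPENDCS is the default install location from the question:

rem create-dirs.bat - packaged in the first "dummy" pack and marked
rem <executable> with stage="postinstall", so it runs with the installing
rem user's permissions, which (per the question) can create folders under C:\.
if not exist "C:\OPENDCS" mkdir "C:\OPENDCS"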

Hudson post-build step security issue

Hudson jobs can be configured with a post-build step that can execute shell commands. Accidentally or intentionally, someone could wipe out the Hudson home directory just by running rm. Is there a specific set of permissions on the home directory that would prevent this scenario?
On Linux, you will likely be running the Hudson process as the "hudson" user. Using a combination of chown and chmod, you can set the permissions on the Hudson application server directory so that the hudson user has only read access to it.
Hudson stores all of its data in /var/lib/jenkins by default (if you're using the .deb package).
So, basically: make sure the hudson user has recursive write access to that data directory, read-only access to the rest of the Hudson installation files, and no access to any other files.
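A sketch of that scheme, assuming the /var/lib/jenkins data directory from the answer and a hypothetical installation path of /usr/share/hudson (adjust both to your system):

# Data directory: hudson needs full recursive access.
sudo chown -R hudson:hudson /var/lib/jenkins
sudo chmod -R u+rwX,go-rwx /var/lib/jenkins

# Installation files: owned by root, readable but not writable by hudson.
sudo chown -R root:root /usr/share/hudson
sudo chmod -R a+rX,go-w /usr/share/hudson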

Jenkins calling batch file on mapped drive

I have a Jenkins job that calls a batch file on a ClearCase drive (V:).
My Jenkins slave agent is running as a service using a local admin account.
The Jenkins job does the following:
cleartool startview MY_VIEW
cd /d "V:\MY_VIEW\Build"
call PrepareBuild.bat
When I run the Jenkins job, I keep getting "Access is denied." in the console output when it tries to call the batch file. However, if I run the same commands manually in a command prompt, they complete successfully.
I did not have this problem under Windows XP. Does anybody know why this is happening on Windows 7 (32-bit)?
Thanks.
V:\ is a virtual drive created with the Windows command subst.
It is a shortcut between the root directory of your dynamic view (M:\yourView) and the virtual drive.
(That is, V:\ is not specifically linked to ClearCase; it is just a drive letter the user chose to associate with a certain ClearCase view root directory.)
However, ClearCase registers that association in the registry under HKCU\Software\Atria\...,
which means the ClearCase session running under Jenkins' local admin account won't know about that association, or about the need to restore the virtual drive.
A workaround is to make the drive mapping permanent, using psubst.
That registers the drive path in [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices], and HKLM is accessible from all accounts.
See "How to make SUBST mapping persistent across reboots?"
I had the same problem, and a simpler solution.
Jenkins doesn't have access to folders that only your user has access to (even though it is run by that user). So for the folder that gets "Access is denied", set the folder permissions to Everyone, not just your user.
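One way to grant that from a command prompt, as a sketch; the path is the build folder from the question (you may need to apply it to the underlying M:\ view path instead), and the modify right with inheritance is an assumption about what the job needs:

icacls "V:\MY_VIEW\Build" /grant "Everyone:(OI)(CI)M"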
