AMBER16: running in parallel through job submission not working

I am trying to run AMBER16 on a cluster, but it does not work when the job is submitted through the scheduler with the "qsub" command. The job does work when run locally on the front node. I have all of the paths set correctly in my .bashrc file. The following is my code:
#!/bin/bash
#PBS -N testAmber
#PBS -l nodes=1:ppn=12
#PBS -l walltime=05:00:00
cd working_directory
export AMBERHOME=/state/partition1/apps/amber16
source $AMBERHOME/amber.sh
mpirun -np 12 $AMBERHOME/bin/sander.MPI -O -i ...etc...
When this is submitted, I get the following error messages:
.../.bashrc: line 46: /state/partition1/apps/amber16/amber.sh: No such file or directory
/var/spool/torque/mom_priv/jobs/...: line 16: /state/partition1/apps/amber16/amber.sh: No such file or directory
mpirun was unable to launch the specified application as it could not access
or execute an executable:
Executable: /state/partition1/apps/amber16/bin/sander.MPI
Node: compute-0-8.local
while attempting to start process rank 0.
I've been trying to find a solution for hours, but am stuck. Please help :(
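One thing worth checking is whether /state/partition1/apps/amber16 is actually visible from the compute nodes: both error messages report the files missing on compute-0-8, which is the classic symptom of software installed on storage local to the front node. A minimal probe job, as a sketch under that assumption (job name and paths mirror the script above):
#!/bin/bash
#PBS -N pathCheck
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:05:00
# Report which node we landed on and whether the AMBER install is
# visible there; if these checks fail, the errors above are explained.
hostname
ls -ld /state/partition1/apps/amber16 || echo "amber16 directory not visible"
ls -l /state/partition1/apps/amber16/amber.sh || echo "amber.sh not visible"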

Related

Script to copy data from cluster-local storage to a pod is neither working nor giving any error

The bash script I'm trying to run on the K8s cluster node from a proxy server is below:
#!/usr/bin/bash
cd /home/ec2-user/PVs/clear-nginx-deployment
for file in *
do
    kubectl -n migration cp "$file" clear-nginx-deployment-d6f5bc55c-sc92s:/var/www/html
done
This script does not copy the data that is in /home/ec2-user/PVs/clear-nginx-deployment on the master node.
But it works fine when I run the same script manually on the destination cluster.
I am using Python's paramiko.SSHClient() to execute the script remotely:
import os
import paramiko
import error_handler  # project-local helper referenced below

def ssh_connect(ip, user, password, command, port):
    try:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(ip, username=user, password=password, port=port)
        stdin, stdout, stderr = client.exec_command(command)
        lines = stdout.readlines()
        for line in lines:
            print(line)
    except Exception as error:
        filename = os.path.basename(__file__)
        error_handler.print_exception_message(error, filename)
    return
To make sure the above function is working fine, I tried another script:
#!/usr/bin/bash
cd /home/ec2-user/PVs/clear-nginx-deployment
mkdir kk
This one runs fine with the same Python function and creates the directory 'kk' in the desired path.
Could you please suggest the reason behind this, or an alternative way to carry it out?
Thank you in advance.
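A variant of the loop with tracing enabled would at least surface what kubectl reports; a minimal sketch (the error-log path is an arbitrary choice):
#!/usr/bin/bash
# Trace each command and keep kubectl's error output for inspection.
set -x
cd /home/ec2-user/PVs/clear-nginx-deployment || exit 1
for file in *
do
    kubectl -n migration cp "$file" clear-nginx-deployment-d6f5bc55c-sc92s:/var/www/html 2>> /tmp/kubectl-cp-errors.log
done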
The issue is now solved.
Actually, the issue was related to permissions, which I only found out later. To resolve it, I first copied the script to the remote machine with scp:
scp script.sh user@ip:/path/on/remote
And then ran the following command from the local machine to run the script remotely:
sshpass -p "password" ssh user@ip "cd /path/on/remote ; sudo su -c './script.sh'"
As I mentioned in the question, I am using Python for this. I used the system function in Python's os module to run both of the above commands from my local machine:
scp the script to the remote machine:
import os
command = "scp script.sh user@ip:/path/on/remote"
os.system(command)
run the script remotely:
import os
command = "sshpass -p \"password\" ssh user@ip \"cd /path/on/remote ; sudo su -c './script.sh'\""
os.system(command)
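As a follow-up, the two steps can also be combined into one small shell script; a sketch, assuming sshpass is available and using its -e flag so the password comes from the SSHPASS environment variable rather than the command line (HOST and REMOTE_PATH are placeholders for the values above):
#!/usr/bin/bash
HOST="user@ip"
REMOTE_PATH="/path/on/remote"
export SSHPASS="password"
# Copy the script over, then execute it remotely in one pass.
sshpass -e scp script.sh "$HOST:$REMOTE_PATH/" &&
    sshpass -e ssh "$HOST" "cd $REMOTE_PATH && sudo su -c './script.sh'"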

How to execute an MQ script file in a Kubernetes Pod?

I have a .mqsc file with commands to create queues (IBM MQ).
How do I run the script with kubectl?
kubectl exec -n test -it mq-0 -- /bin/bash -f create_queues.mqsc
doesn't work.
log:
/bin/bash: create_queues.mqsc: No such file or directory
command terminated with exit code 127
Most probably your script is not under the "/" directory in the container. You need to find the file's full path, and then execute the script using that path.
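For example, one way to do that is to copy the file into the pod first and then feed it to runmqsc, which reads MQSC commands from standard input; a sketch, assuming the queue manager inside the container is named QM1:
# Copy the local .mqsc file into the pod, then run it against QM1.
kubectl -n test cp create_queues.mqsc mq-0:/tmp/create_queues.mqsc
kubectl -n test exec mq-0 -- /bin/bash -c "runmqsc QM1 < /tmp/create_queues.mqsc"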

jmeter.log file is not created in non-GUI mode

I'm running my JMeter script using an SH file containing the commands below:
#! /bin/sh
JMETER_HOME=/jmeter/DummyTest/Jmeter4/apache-jmeter-4.0
#PATH=$PATH:$JMETER_HOME/bin
#export PATH
echo $PATH
cd $1
echo current dir is `pwd`
echo "=== START OF run-load-atcom_scripts.sh SCRIPT ==="
$JMETER_HOME/bin/jmeter.sh -n -t /jmeter/DummyTest/TrialScript1.jmx
While running, I'm getting the error below:
Uncaught Exception java.lang.IllegalStateException: Failed calling setupTest. See log file for details.
In this case, the jmeter.log file is not created or updated in the JMeter bin folder.
Can anyone help me with this?
I got to fix this: I passed the option -j /path/to/logfile/logfile.log,
which works.
Apart from this, if anyone comes up with a solution that creates the log automatically in the bin folder (rather than using -j), it would be helpful for everyone.
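For reference, the fixed invocation would look like the first command below; the second variant rests on the assumption that JMeter writes jmeter.log to the current working directory by default, so launching from the bin folder gets the log there without -j:
# Explicit log location via -j:
$JMETER_HOME/bin/jmeter.sh -n -t /jmeter/DummyTest/TrialScript1.jmx -j /jmeter/DummyTest/jmeter.log
# Without -j (assumes jmeter.log defaults to the working directory):
cd $JMETER_HOME/bin
./jmeter.sh -n -t /jmeter/DummyTest/TrialScript1.jmx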

Cygwin run multiple commands at once

MATLAB runs on a host machine. Using the 'system' call and Cygwin, I have to run some applications on a remote Linux system.
The problem is that after calling the SSH command, the other commands are ignored,
so
system('C:\cygwin64\bin\bash -l -c "ssh -t -t 10.0.0.127; cd /home/superuser/MAGIC_PATH"')
does not work
So I tried to change the directory sequentially after the SSH connection, but now the MATLAB script blocks and I have to type the commands manually, which is not the desired solution.
In MATLAB:
cygwin_path='C:\cygwin64\bin\bash';
binary_path='/home/superuser/MAGIC_PATH';
SSH_string=sprintf('%s -l -c "ssh -t -t %s &"',cygwin_path,remote_IP)
ChangeDIR_string=sprintf('%s -l -c "cd /home/superuser/"',cygwin_path)
So how can I change my code, or rather the system call, so that it automatically runs multiple commands and starts some applications (as background jobs)?
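One common pattern here is to hand ssh the whole remote command sequence as a single quoted argument, so everything runs on the remote side and the application is backgrounded there; a sketch of the bash side (my_app and its log file are placeholders), which would then be wrapped in the usual C:\cygwin64\bin\bash -l -c "..." inside MATLAB's system call:
# cd and launch both happen on 10.0.0.127; nohup plus & detaches the app
# so the ssh session (and hence the MATLAB call) can return immediately.
ssh 10.0.0.127 'cd /home/superuser/MAGIC_PATH && nohup ./my_app > app.log 2>&1 &'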

cannot chdir to /path/to/job_submit_dir/ in SGE cluster

I use qsub to submit a job to the SGE cluster. In the job file, the following is defined:
#!/bin/bash
#
#$ -V
#$ -cwd
#$ -j y
#$ -S /bin/bash
#
The -cwd option indicates that the job will run in the directory from which it was submitted. All job files contain the settings above.
Some of the jobs are submitted and run correctly, but for others the status reported by qstat is Eqw, and when I use qstat -j job_id to show the detailed status, it shows:
failed changing into working directory because:
error: can't chdir to /path/to/job_submit_dir
But sometimes when I go into the directory and resubmit the job, it seems to work.
I've searched on Google, and this site has provided a solution, but it doesn't work for my setup.
Could anyone give some advice, please?
It appears that for this instance of the error, the issue may be due to excessive writes to network-mounted storage:
https://www.icts.uiowa.edu/confluence/display/ICTSit/Best+practices+for+high+throughput+jobs
To solve it, try redirecting output to local storage on each execution node, or to /dev/null.
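In job-script form, that redirection could look like the following sketch (the /dev/null target is the throwaway case; point -o at node-local scratch instead if the output is needed; with -j y, stderr is already merged into stdout):
#!/bin/bash
#$ -V
#$ -cwd
#$ -j y
#$ -S /bin/bash
# Send the job's combined output to /dev/null instead of the
# network-mounted submit directory.
#$ -o /dev/null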
