Julia parallel computing over multiple nodes in a cluster

I am running some jobs on a shared cluster and I've been trying to use more than 1 node at a time. While using julia -p #processors works for the cores on one node, it doesn't find the other nodes.
The cluster is using SGE, and I tried a lot of different ways to make the nodes work, but only one was working. Is there an easy way built into Julia to launch it across nodes with something like julia -mpi 32?
Using
using ClusterManagers
println(nworkers(),nprocs(),Sys.CPU_CORES)
ClusterManagers.addprocs_sge(16)
ClusterManagers.addprocs_sge(15)
println(nworkers(),nprocs(),Sys.CPU_CORES)
doesn't work (I have submitted a job reserving 2 nodes with 16 cores each on the SGE): the job's output file is empty, and instead I get 16 different output files julia-70755.o8252776.* (* = 1...16), each with the following text:
julia_worker:9009#192.168.17.206
Master process (id 1) could not connect within 60.0 seconds.
exiting.
Starting Julia with julia --machinefile $PE_HOSTFILE also failed with:
Warning: Permanently added the RSA host key for IP address '192.168.18.10' to the list of known hosts.
ERROR: connect: invalid argument (EINVAL)
in uv_error at ./libuv.jl:68 [inlined]
in connect!(::TCPSocket, ::IPv4, ::UInt16) at ./socket.jl:652
in connect!(::TCPSocket, ::SubString{String}, ::UInt16) at ./socket.jl:688
in connect at ./stream.jl:959 [inlined]
in connect_to_worker(::SubString{String}, ::Int16) at ./managers.jl:483
in connect(::Base.SSHManager, ::Int64, ::WorkerConfig) at ./managers.jl:425
in create_worker(::Base.SSHManager, ::WorkerConfig) at ./multi.jl:1786
in setup_launched_worker(::Base.SSHManager, ::WorkerConfig, ::Array{Int64,1}) at ./multi.jl:1733
in (::Base.##669#673{Base.SSHManager,Array{Int64,1}})() at ./task.jl:360
in sync_end() at ./task.jl:311
in macro expansion at ./task.jl:327 [inlined]
in #addprocs_locked#665(::Array{Any,1}, ::Function, ::Base.SSHManager) at ./multi.jl:1688
in (::Base.#kw##addprocs_locked)(::Array{Any,1}, ::Base.#addprocs_locked, ::Base.SSHManager) at ./<missing>:0
in #addprocs#664(::Array{Any,1}, ::Function, ::Base.SSHManager) at ./multi.jl:1658
in (::Base.#kw##addprocs)(::Array{Any,1}, ::Base.#addprocs, ::Base.SSHManager) at ./<missing>:0
in #addprocs#764(::Bool, ::Cmd, ::Int64, ::Array{Any,1}, ::Function, ::Array{Any,1}) at ./managers.jl:112
in process_options(::Base.JLOptions) at ./client.jl:227
in _start() at ./client.jl:321
UndefRefError()
I was advised to use the MPI.jl package, but it doesn't look to me like it really supports the Julia parallel syntax, at least not the way I'm using it, which is just writing @sync @parallel before a for loop that I want to run in parallel (e.g. Metropolis Monte Carlo).
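For concreteness, the pattern I mean is roughly this (a minimal sketch in the Julia 0.5/0.6 syntax I'm using; mc_step is just an illustrative stand-in for one Metropolis trial):
addprocs(4)                               # on the cluster these workers should come from the other nodes
@everywhere function mc_step(beta)
    rand() < exp(-beta) ? 1 : 0           # placeholder Metropolis accept/reject step
end
accepted = @parallel (+) for i = 1:10^6   # reducing form; I also use @sync @parallel for side-effect loops
    mc_step(0.5)
end
println("acceptance rate: ", accepted / 10^6)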

The IT team got back to me and told me that SGE does not allow passwordless SSH, which is why addprocs_sge() wouldn't work. However, they now add a machine file to the job that I can pass to Julia, and told me to run the job with this script:
qlogin -pe mpi_28_tasks_per_node 56
module load julia/0.5.1
julia --machinefile $TMPDIR/machines
The machines file looks like this:
::::::::::::::
/scratch/8548498.1.u/machines
::::::::::::::
{hostname1}
{hostname1}
...
{hostname2}
{hostname2}
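With Julia started this way the workers are attached automatically at launch, so a quick sanity check from the REPL looks like this (a small sketch):
println(nprocs(), " processes, ", nworkers(), " workers")
for w in workers()
    println(w, " => ", remotecall_fetch(gethostname, w))   # shows which node each worker landed on
end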

You might want to read the Julia docs on parallel computing, where there is a section on cluster managers. Also, take a look at ClusterManagers.jl, where SGE is supported:
julia> using ClusterManagers
julia> ClusterManagers.addprocs_sge(5)
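Once the workers have been added, the usual parallel constructs run across them; for example (the function f here is just an illustration):
julia> @everywhere f(x) = x^2
julia> results = pmap(f, 1:100)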

Related

Memory usage increases when data is copied from other images (processors) in Coarray Fortran?

I am using coarrays to parallelize a Fortran code. The code works properly on my PC (Ubuntu 18, OpenCoarrays 2.0.0). However, when I run the code on the cluster (CentOS) it crashes with the following error:
=====================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 9
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
=====================================================================================
APPLICATION TERMINATED WITH THE EXIT STRING: Killed (signal 9)
Error: Command:
/APP/enhpc/mpi/mpich2-gcc-hd/bin/mpiexec -n 10 -machinefile machines ./IPS
failed to run
Using the top command while the code was running, I found that memory usage increases as the code runs. The problem comes from where I copy some data from another image:
for example a(:)=b(:)[k]
Since the code runs properly on my PC, what can be the reason for the memory increase on the cluster?
I have to mention that I am running the code with cores on a single node.
The memory increases continuously. It is a CentOS cluster; I do not know what kind of architecture it has. I am using OpenCoarrays v2.9.1, which uses Coarray Fortran (CAF) for compiling, with GNU (gfortran) 10.1. I wrote a simple test code as follows:
program hello_image
  integer :: m, n, i, j
  integer, allocatable :: A(:)[:], B(:)
  m = 1e3
  n = 1e6
  allocate(A(n)[*], B(n))
  A(:) = 10
  B(:) = 20
  write(*,*) this_image()
  do j = 1, m
    do i = 1, n
      B(i) = A(i)[3] ! this line means that the data is copied from image 3 to the other images
    enddo
    write(*,*) j, this_image()
  enddo
end program hello_image
When I run this code on my PC, the memory usage of all images stays at a constant 0.1% and does not increase. However, when I run the same code on the cluster, the memory usage increases continuously.
Output from my PC: (screenshot omitted)
Output from the cluster: (screenshot omitted)

Problems with Orca and OpenMPI for parallel jobs

Hello to the community:
I recently started using the ORCA software for some quantum calculations, but I have been having a lot of problems launching parallel calculations on my university's cluster.
To install ORCA I used the static version orca_4_2_1_linux_x86-64_openmpi314.tar.xz, unpacked in a shared location on the cluster (/data/shared/opt/ORCA/), and put this in my ~/.bash_profile:
export PATH="/data/shared/opt/ORCA/orca_4_2_1_linux_x86-64_openmpi314:$PATH"
export LD_LIBRARY_PATH="/data/shared/opt/ORCA/orca_4_2_1_linux_x86-64_openmpi314:$LD_LIBRARY_PATH"
For the installation of the corresponding OpenMPI version (3.1.4):
tar -xvf openmpi-3.1.4.tar.gz
cd openmpi-3.1.4
./configure --prefix="/data/shared/opt/ORCA/openmpi314/"
make -j 10
make install
When I use the frontend server everything works fine.
With a .sh like this:
#! /bin/bash
export PATH="/data/shared/opt/ORCA/openmpi314/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/data/shared/opt/ORCA/openmpi314/lib"
$(which orca) test.inp > test.out
and an input like this:
# Computation of myjob at b3lyp/6-31+G(d,p)
%pal nprocs 10 end
%maxcore 8192
! RKS B3LYP 6-31+G(d,p)
! TightSCF Grid5 NoFinalGrid
! Opt
! Freq
%cpcm
smd true
SMDsolvent "water"
end
* xyz 0 1
C 0 0 0
O 0 0 1.5
*
The problem appears when I use the nodes:
.inp file:
#! Computation at RKS B3LYP/6-31+G(d,p) for cis1_bh267_m_Cell_152
%pal nprocs 12 end
%maxcore 8192
! RKS B3LYP 6-31+G(d,p)
! TightSCF Grid5 NoFinalGrid
! Opt
! Freq
%cpcm
smd true
SMDsolvent "water"
end
* xyz 0 1
C -4.38728130 0.21799058 0.17853303
C -3.02072869 0.82609890 -0.29733316
F -2.96869122 2.10937041 0.07179384
F -3.01136328 0.87651596 -1.63230798
C -1.82118365 0.05327804 0.23420220
O -2.26240947 -0.92805650 1.01540713
C -0.53557484 0.33394113 -0.05236121
C 0.54692198 -0.46942807 0.50027196
O 0.31128292 -1.43114232 1.22440290
C 1.93990391 -0.12927675 0.16510948
C 2.87355011 -1.15536140 -0.00858832
C 4.18738231 -0.82592189 -0.32880964
C 4.53045856 0.52514329 -0.45102225
N 3.63662927 1.52101319 -0.26705841
C 2.36381718 1.20228695 0.03146190
F -4.51788749 0.24084604 1.49796862
F -4.53935644 -1.04617745 -0.19111502
F -5.43718443 0.87033190 -0.30564680
H -1.46980819 -1.48461498 1.39034280
H -0.26291843 1.15748249 -0.71875720
H 2.57132559 -2.20300864 0.10283592
H 4.93858460 -1.60267627 -0.48060140
H 5.55483009 0.83859415 -0.70271364
H 1.67507560 2.05019549 0.17738396
*
.sh file (Slurm job):
#!/bin/bash
#SBATCH -p deflt #which partition I want
#SBATCH -o cis1_bh267_m_Cell_152_myjob.out #path for the slurm output
#SBATCH -e cis1_bh267_m_Cell_152_myjob.err #path for the slurm error output
#SBATCH -c 12 #number of cpu(logical cores)/task (task is normally an MPI process, default is one and the option to change it is -n)
#SBATCH -t 2-00:00 #how many time I want the resources (this impacts the job priority as well)
#SBATCH --job-name=cis1_bh267_m_Cell_152 #(to recognize your jobs when checking them with "squeue -u USERID")
#SBATCH -N 1 #number of node, usually 1 when no parallelization over nodes
#SBATCH --nice=0 #lowering your priority if >0
#SBATCH --gpus=0 #number of gpu you want
# This block is echoing some SLURM variables
echo "Jobid = $SLURM_JOBID"
echo "Host = $SLURM_JOB_NODELIST"
echo "Jobname = $SLURM_JOB_NAME"
echo "Subcwd = $SLURM_SUBMIT_DIR"
echo "SLURM_CPUS_PER_TASK = $SLURM_CPUS_PER_TASK"
# This block is for the execution of the program
export PATH="/data/shared/opt/ORCA/openmpi314/bin:$PATH"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/data/shared/opt/ORCA/openmpi314/lib"
$(which orca) ${SLURM_JOB_NAME}.inp > ${SLURM_JOB_NAME}.log --use-hwthread-cpus
I used the --use-hwthread-cpus flag as recommended, but the same problem appears with and without it.
The full error is:
There are not enough slots available in the system to satisfy the 12 slots that were requested by the application: /data/shared/opt/ORCA/orca_4_2_1_linux_x86-64_openmpi314/orca_gtoint_mpi
Either request fewer slots for your application, or make more slots available for use. A "slot" is the Open MPI term for an allocatable unit where we can launch a process. The number of slots available are defined by the environment in which Open MPI processes are run:
1. Hostfile, via "slots=N" clauses (N defaults to number of processor cores if not provided)
2. The --host command line parameter, via a ":N" suffix on the hostname (N defaults to 1 if not provided)
3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
4. If none of a hostfile, the --host command line parameter, or an RM is present, Open MPI defaults to the number of processor cores.
In all the above cases, if you want Open MPI to default to the number of hardware threads instead of the number of processor cores, use the --use-hwthread-cpus option.
Alternatively, you can use the --oversubscribe option to ignore the number of available slots when deciding the number of processes to launch.
[file orca_tools/qcmsg.cpp, line 458]:
.... aborting the run
When I look at the output of the calculation, it seems to start running, but when it launches the parallel jobs it fails and gives:
ORCA finished by error termination in GTOInt
Calling Command: mpirun -np 12 --use-hwthread-cpus /data/shared/opt/ORCA/orca_4_2_1_linux_x86-64_openmpi314/orca_gtoint_mpi cis1_bh267_m_Cell_448.int.tmp cis1_bh267_m_Cell_448
[file orca_tools/qcmsg.cpp, line 458]:
.... aborting the run
We have two kinds of nodes on the cluster.
A bunch of them are:
Xeon 6-core E-2136 @ 3.30GHz (12 logical cores) and an Nvidia GTX 1070Ti
And the other ones:
AMD Epyc 24-core (48 logical cores) and 4x Nvidia RTX 2080Ti
Using the command scontrol show node, the details of one node from each group are:
First Group:
NodeName=fang1 Arch=x86_64 CoresPerSocket=6
CPUAlloc=12 CPUTot=12 CPULoad=12.00
AvailableFeatures=(null)
ActiveFeatures=(null)
Gres=gpu:gtx1070ti:1
NodeAddr=fang1 NodeHostName=fang1 Version=19.05.5
OS=Linux 5.7.12-arch1-1 #1 SMP PREEMPT Fri, 31 Jul 2020 17:38:22 +0000
RealMemory=15923 AllocMem=0 FreeMem=171 Sockets=1 Boards=1
State=ALLOCATED ThreadsPerCore=2 TmpDisk=7961 Weight=1 Owner=N/A MCS_label=N/A
Partitions=deflt,debug,long
BootTime=2020-10-27T09:56:18 SlurmdStartTime=2020-10-27T15:33:51
CfgTRES=cpu=12,mem=15923M,billing=12,gres/gpu=1,gres/gpu:gtx1070ti=1
AllocTRES=cpu=12,gres/gpu=1,gres/gpu:gtx1070ti=1
CapWatts=n/a
CurrentWatts=0 AveWatts=0
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
Second Group
NodeName=fang50 Arch=x86_64 CoresPerSocket=24
CPUAlloc=48 CPUTot=48 CPULoad=48.00
AvailableFeatures=(null)
ActiveFeatures=(null)
Gres=gpu:rtx2080ti:4
NodeAddr=fang50 NodeHostName=fang50 Version=19.05.5
OS=Linux 5.7.12-arch1-1 #1 SMP PREEMPT Fri, 31 Jul 2020 17:38:22 +0000
RealMemory=64245 AllocMem=0 FreeMem=807 Sockets=1 Boards=1
State=ALLOCATED ThreadsPerCore=2 TmpDisk=32122 Weight=1 Owner=N/A MCS_label=N/A
Partitions=deflt,long
BootTime=2020-12-15T10:09:43 SlurmdStartTime=2020-12-15T10:14:17
CfgTRES=cpu=48,mem=64245M,billing=48,gres/gpu=4,gres/gpu:rtx2080ti=4
AllocTRES=cpu=48,gres/gpu=4,gres/gpu:rtx2080ti=4
CapWatts=n/a
CurrentWatts=0 AveWatts=0
ExtSensorsJoules=n/s ExtSensorsWatts=0 ExtSensorsTemp=n/s
In the Slurm script I use the flag -c, --cpus-per-task = integer, and in the ORCA input the command %pal nprocs integer end. I tested different combinations of these two parameters to see whether I was requesting more CPUs than are available:

-c, --cpus-per-task    %pal nprocs ... end
None                   6
None                   3
None                   2
1                      2
1                      12
2                      6
3                      4
12                     12
I tried this with different amounts of memory: 8000 MiB and 2000 MiB (my total memory is around 15 GiB). In all cases the same error appears. I am not an expert user in either ORCA or computing in general (but maybe you guessed that from the length of the question), so maybe the solution is simple, but I really don't have it; I don't know what's going on!
A lot of thanks in advance,
Alejandro.
I faced the same issue.
Explicitly declaring --prefix ${OMPI_HOME} directly as an ORCA parameter and using the statically linked ORCA version helped me:
export RSH_COMMAND="/usr/bin/ssh"
export PARAMS="--mca routed direct --oversubscribe -machinefile ${HOSTS_FILE} --prefix ${OMPI_HOME}"
$ORCA_DIR/orca $WORKDIR/$JOBFILE.inp "$PARAMS" > $WORKDIR/$JOBFILE.out
Also, it's better to build OpenMPI 3.1.x with the --disable-builtin-atomics flag.
Thank you @Alexey for your answer, and sorry for the wrong tag; like I said, I am pretty much a rookie at this stuff.
The problem was not in the ORCA or OpenMPI configuration but in the bash script used to schedule the Slurm job.
I thought that the entire ORCA job itself was what Slurm calls a "task". For that reason I declared the --cpus-per-task flag equal to the number of parallel jobs that I wanted to run with ORCA. But the problem is that each parallel ORCA job (which is launched using OpenMPI) is a task for Slurm. Therefore with my Slurm script I was reserving a node with at least 12 CPUs, but when ORCA launched its parallel jobs, each one asked for 12 CPUs, hence "There are not enough slots available ...", because I would have needed 144 CPUs.
The rest of the cases in the table in my question fail for another reason. I was launching 5 different ORCA calculations at the same time. Because --cpus-per-task could be None, 1, 2 or 3, the five calculations might land on the same node or on another node with that amount of free CPUs, but when ORCA asked for its parallel jobs, it failed again because that many CPUs were not available on the node.
The solution that I found is pretty simple. In the .sh script for Slurm I put this:
#SBATCH --mincpus=n*m
#SBATCH --ntasks=n
#SBATCH --cpus-per-task m
Instead of only:
#SBATCH --cpus-per-task m
Here n is the number of parallel jobs specified in the ORCA input (%pal nprocs n end) and m is the number of CPUs that you want to use for each parallel ORCA job.
In my case I used n = 12, m = 1. With the --mincpus flag I made sure to get a node with at least 12 CPUs and allocate them. What --cpus-per-task does is pretty evident (even to me :-) ); by the way, it has a default value of 1, and I don't know whether more than 1 CPU per OpenMPI ORCA job improves the speed of the calculation. And --ntasks tells Slurm how many tasks you will run.
Of course, if you know the number of tasks and the CPUs per task, it is easy to work out how many CPUs you need to reserve, but I don't know if that is as easy for Slurm too :-). So, to be sure that I allocate the correct number of CPUs, I used the --mincpus flag, but maybe it is not needed. The thing is that it works now ^_^.
It is also important to take into account the amount of memory that you declare in the ORCA input, so as not to exceed the available memory. For example, if you have 12 tasks and 15000 MiB of RAM, the right amount of memory to declare per task should be no more than 15000/12 = 1250 MiB.
I had a similar problem with parallel jobs before; Slurm also reported the "not enough slots" error.
My solution was to change parallel threads into parallel processes. For my system that means changing
#SBATCH -c 24
into
#SBATCH -n 24
and everything works just fine.

Julia program stalls when run from crontab scheduler (Linux)

I have a really specific and tricky bug that I can't figure out how to fix/work around and I can't find a similar case on here.
I have a bash script that invokes a Julia script partway through to generate animation frames, then calls ffmpeg to render the animation. When I run from the terminal everything works great. I wanted to automate the process so I got a fun random simulation once a day, so I added it to my crontab and it runs--but only to a certain point. The animation always stops at a specific frame, then the rest of the script continues and spits out the chopped off animation.
I thought maybe cron was the problem, so I installed jobber and ran the job from there--with jobber the script just stalls at the Julia part. From the resource manager I can see the Julia process still using memory (although well beneath the limit) but it's just gone to sleep.
Another strange thing that I have noticed is that when I invoke the script manually from the command line it runs ~2-4x faster in generating the animation frames than when its running automatically via crontab/jobber.
Is this a weird resource issue? To get the longer animations to render in the first place I had to modify my ulimit settings, but I changed the config file so they should be set higher for everything. How can I debug this further and/or fix it?
If you want to see an example of the code being run (both the shell script and the Julia script it invokes), it's pretty much up to date on my GitHub here. In the threeBodyProb.jl file, I'm pretty sure the hang-up is with the frame function in the for loop at the end of the file.
I am running Linux Mint 19.1 Cinnamon. Thanks in advance for the help!
Here is the part of the bash script where it hangs up:
./threeBodyProb.jl
echo animation generated, running ffmpeg >> /home/kirk/Documents/3Body/cron_log.txt
cd tmpPlots
</dev/null ffmpeg -framerate 30 -i "%06d.png" -c:v libx264 -preset slow -coder 1 -movflags +faststart -g 15 -crf 18 -pix_fmt yuv420p -profile:v high -y -bf 2 -fs 15M -vf "scale=720:720,setdar=1/1" "/home/kirk/Documents/3Body/3Body_fps30.mp4"
And here is the for loop that hangs up in Julia:
plotLoadPath="/home/kirk/Documents/3Body/tmpPlots/"
threeBodyAnim=Animation(plotLoadPath,String[])
for i=1:35:length(t)
    gr(legendfontcolor = plot_color(:white)) #legendfontcolor=:white plot arg broken right now (at least in this backend)
    print("$(@sprintf("%.2f",i/length(t)*100)) % complete\r") #output percent tracker
    pos=[plotData[1][i],plotData[2][i],plotData[3][i],plotData[4][i],plotData[5][i],plotData[6][i]] #current pos
    limx,limy=getLims(pos./1.5e11,10) #convert to AU, 10 AU padding
    p=plot(plotData[1][1:i]./1.5e11,plotData[2][1:i]./1.5e11,label="",linecolor=colors[1]) #plot orbits up to i
    p=plot!(plotData[3][1:i]./1.5e11,plotData[4][1:i]./1.5e11,label="",linecolor=colors[2])
    p=plot!(plotData[5][1:i]./1.5e11,plotData[6][1:i]./1.5e11,label="",linecolor=colors[3])
    p=scatter!(starsX,starsY,markercolor=:white,markersize=:1,label="") #fake background stars
    star1=makeCircleVals(rad[1],[plotData[1][i],plotData[2][i]]) #generate circles with appropriate sizes for each star
    star2=makeCircleVals(rad[2],[plotData[3][i],plotData[4][i]]) #at current positions
    star3=makeCircleVals(rad[3],[plotData[5][i],plotData[6][i]])
    p=plot!(star1[1]./1.5e11,star1[2]./1.5e11,label="$(@sprintf("%.1f", m[1]./2e30))",color=colors[1],fill=true) #plot star circles with labels
    p=plot!(star2[1]./1.5e11,star2[2]./1.5e11,label="$(@sprintf("%.1f", m[2]./2e30))",color=colors[2],fill=true)
    p=plot!(star3[1]./1.5e11,star3[2]./1.5e11,label="$(@sprintf("%.1f", m[3]./2e30))",color=colors[3],fill=true)
    p=plot!(background_color=:black,background_color_legend=:transparent,foreground_color_legend=:transparent,
        background_color_outside=:white,aspect_ratio=:equal,legendtitlefontcolor=:white) #formatting for plot frame
    p=plot!(xlabel="x: AU",ylabel="y: AU",title="Random Three Body Problem\nt: $(@sprintf("%0.2f",t[i]/365/24/3600)) yrs after start",
        legend=:best,xaxis=("x: AU",(limx[1],limx[2]),font(9,"Courier")),yaxis=("y: AU",(limy[1],limy[2]),font(9,"Courier")),
        grid=false,titlefont=font(14,"Courier"),size=(720,721),legendfontsize=8,legendtitle="Mass (in solar masses)",legendtitlefontsize=8) #add in axes/title/legend with formatting
    frame(threeBodyAnim,p) #generate the frame
end
If it helps, when run from cron or jobber it always generates 407 frames and fails at the 408th.
UPDATE: Following @TasosPapastylianou's suggestion below, I think the problem may be in differing environments when Julia is run directly in the terminal vs from a background process like crontab or jobber. I've added the output of a test script that dumps the environment, run both from crontab/jobber and directly from the command line. I'm not sure if this is the problem, and if it is, how I should tell cron to work in this environment for this job (I tried sourcing .bashrc and .profile in the script, but that had no effect on the env output).
From cron:
SHLVL=1
HOME=/home/kirk
LOGNAME=kirk
_=/home/kirk/bashTest.sh
PATH=/opt/someApp/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
LANG=en_US.UTF-8
SHELL=/bin/bash
PWD=/home/kirk
OPENBLAS_MAIN_FREE=1
From jobber:
MAIL=/var/mail/kirk
USER=kirk
HOME=/home/kirk
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
LOGNAME=kirk
XDG_SESSION_ID=c4
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
XDG_RUNTIME_DIR=/run/user/1000
LANG=en_US.UTF-8
SHELL=/bin/sh
PWD=/home/kirk
XDG_DATA_DIRS=/home/kirk/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
OPENBLAS_MAIN_FREE=1
And when run manually from command line:
GJS_DEBUG_TOPICS=JS ERROR;JS LOG
LESSOPEN=| /usr/bin/lesspipe %s
PERLBREW_VERSION=0.86
PGPLOT_DIR=/home/kirk/Documents/research/MESA/mesasdk/lib/pgplot
USER=kirk
LANGUAGE=en_US
XDG_SEAT=seat0
SSH_AGENT_PID=1786
XDG_SESSION_TYPE=x11
SHLVL=1
CONDA_SHLVL=0
HOME=/home/kirk
DESKTOP_SESSION=cinnamon
GTK_MODULES=gail:atk-bridge
XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0
PERLBREW_ROOT=/home/kirk/perl5/perlbrew
PERLBREW_MANPATH=/home/kirk/perl5/perlbrew/perls/perl-5.24.1/man
MESA_DIR=/home/kirk/Documents/research/MESA/mesa-r11701
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
CINNAMON_VERSION=4.0.10
COLORTERM=truecolor
_CE_M=
MANDATORY_PATH=/usr/share/gconf/cinnamon.mandatory.path
QT_QPA_PLATFORMTHEME=qt5ct
HEADAS=/home/kirk/Documents/research/HEASOFT/heasoft-6.26/x86_64-pc-linux-gnu-libc2.27
LOGNAME=kirk
_=./bashTest.sh
DEFAULTS_PATH=/usr/share/gconf/cinnamon.default.path
GIO_EXTRA_MODULES=/usr/lib/x86_64-linux-gnu/gio/modules/
GTK_OVERLAY_SCROLLING=1
XDG_SESSION_ID=c12
TERM=xterm-256color
MESASDK_VERSION=x86_64-linux-20190503
XMM_DIR=/home/kirk/Documents/research/XMM_Newton/xmmsas_20190531_1155
_CE_CONDA=
GNOME_DESKTOP_SESSION_ID=this-is-deprecated
PATH=/home/kirk/Documents/research/MESA/mesasdk/bin:/home/kirk/anaconda3/bin:/home/kirk/anaconda3/condabin:/home/kirk/perl5/perlbrew/bin:/home/kirk/perl5/perlbrew/perls/perl-5.24.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
GDM_LANG=en_US
PERLBREW_HOME=/home/kirk/.perlbrew
SESSION_MANAGER=local/kirk-Inspiron-7352:#/tmp/.ICE-unix/1709,unix/kirk-Inspiron-7352:/tmp/.ICE-unix/1709
GNOME_TERMINAL_SCREEN=/org/gnome/Terminal/screen/bc8ec572_68ae_4d79_88a2_3cb33f74d86c
XDG_RUNTIME_DIR=/run/user/1000
XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session0
DISPLAY=:0
VALGRIND_LIB=/home/kirk/Documents/research/MESA/mesasdk/lib/valgrind
LANG=en_US.UTF-8
XDG_CURRENT_DESKTOP=X-Cinnamon
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
PERLBREW_PATH=/home/kirk/perl5/perlbrew/bin:/home/kirk/perl5/perlbrew/perls/perl-5.24.1/bin
XDG_SESSION_DESKTOP=cinnamon
GNOME_TERMINAL_SERVICE=:1.62
XAUTHORITY=/home/kirk/.Xauthority
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
XDG_GREETER_DATA_DIR=/var/lib/lightdm-data/kirk
MESASDK_ROOT=/home/kirk/Documents/research/MESA/mesasdk
CONDA_PYTHON_EXE=/home/kirk/anaconda3/bin/python
SHELL=/bin/bash
QT_ACCESSIBILITY=1
GDMSESSION=cinnamon
LESSCLOSE=/usr/bin/lesspipe %s %s
QT_LOGGING_RULES=qt5ct.debug=false
PERLBREW_PERL=perl-5.24.1
GJS_DEBUG_OUTPUT=stderr
GPG_AGENT_INFO=/run/user/1000/gnupg/S.gpg-agent:0:1
XDG_VTNR=7
PWD=/home/kirk
CONDA_EXE=/home/kirk/anaconda3/bin/conda
XDG_DATA_DIRS=/usr/share/cinnamon:/usr/share/gnome:/home/kirk/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
XDG_CONFIG_DIRS=/etc/xdg/xdg-cinnamon:/etc/xdg
OMP_NUM_THREADS=2
PERLBREW_SHELLRC_VERSION=0.82
VTE_VERSION=5202
MANPATH=/home/kirk/Documents/research/MESA/mesasdk/share/man:/home/kirk/perl5/perlbrew/perls/perl-5.24.1/man:/usr/local/man:/usr/local/share/man:/usr/share/man
OPENBLAS_MAIN_FREE=1
UPDATE 2: Again following @TasosPapastylianou's suggestion, after telling the Julia script to log any errors when run from crontab, I get the following stacktrace when it attempts to generate frame 408:
ERROR: LoadError: SystemError: opening file "/tmp/juliaFCI2yw.png": No such file or directory
Stacktrace:
[1] #systemerror#43(::Nothing, ::Function, ::String, ::Bool) at ./error.jl:134
[2] systemerror at ./error.jl:134 [inlined]
[3] #open#309(::Nothing, ::Nothing, ::Nothing, ::Nothing, ::Nothing, ::Function, ::String) at ./iostream.jl:289
[4] open at ./iostream.jl:281 [inlined]
[5] #open#310(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::getfield(Base, Symbol("##274#275")){String}, ::String) at ./iostream.jl:373
[6] open at ./iostream.jl:373 [inlined]
[7] read at ./io.jl:297 [inlined]
[8] _show(::IOStream, ::MIME{Symbol("image/png")}, ::Plots.Plot{Plots.GRBackend}) at /home/kirk/.julia/packages/Plots/h3o4c/src/backends/gr.jl:1603
[9] show(::IOStream, ::MIME{Symbol("image/png")}, ::Plots.Plot{Plots.GRBackend}) at /home/kirk/.julia/packages/Plots/h3o4c/src/output.jl:198
[10] png(::Plots.Plot{Plots.GRBackend}, ::String) at /home/kirk/.julia/packages/Plots/h3o4c/src/output.jl:8
[11] frame(::Animation, ::Plots.Plot{Plots.GRBackend}) at /home/kirk/.julia/packages/Plots/h3o4c/src/animation.jl:20
[12] top-level scope at /home/kirk/Documents/3Body/threeBodyProb.jl:265
[13] include at ./boot.jl:326 [inlined]
[14] include_relative(::Module, ::String) at ./loading.jl:1038
[15] include(::Module, ::String) at ./sysimg.jl:29
[16] exec_options(::Base.JLOptions) at ./client.jl:267
[17] _start() at ./client.jl:436
I'm unsure how to diagnose this--does cron limit the number of files a process can create or something like that? In my bash script I have also manually added the following settings (just in case) but that still resulted in the stacktrace above:
ulimit -n 4096
ulimit -t unlimited
Thanks so much for the help @TasosPapastylianou--that error message eventually led me to this post which fixed my problem (and also significantly sped up the animation rendering process as a nice byproduct).
Ultimately it appears the problem was not with cron or the bash script, but instead with Julia's GR backend. I added the line
GR.inline("png")
to the top of the for loop generating the plots, to explicitly tell it I was making png files, and apparently that fixes everything--I'm not really sure why this is needed, or why it's only needed when running from crontab/jobber, so if anyone has further insight I'd love to know, but I'm glad it works!
Thanks again to everyone for their help and insights--this tip should be helpful to anyone making animations in a similar way with Julia due to the dramatic improvement in performance that came from this one line!
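For anyone wanting to see where that line sits, here is a stripped-down sketch (illustrative names and paths, not my actual script; it assumes Plots.jl with the GR backend and that GR itself is imported):
using Plots
import GR
gr()                                     # select the GR backend
mkpath("/tmp/frames")                    # hypothetical frame directory
anim = Animation("/tmp/frames", String[])
for i = 1:10
    GR.inline("png")                     # the workaround: tell GR explicitly that we are writing PNGs
    p = plot(rand(10), legend=false)
    frame(anim, p)                       # this call failed under cron without the line above
end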

Julia and Slurm setup

I have a master node and 3 compute nodes.
Julia on the master node is in /apps and in /state/p1/apps.
I do not have Julia as a Slurm module.
How should I set up the Julia installation so that I can invoke a Julia script through Slurm using ClusterManagers?
Currently I get an error
srun: error: node-0-2: tasks 0-2: Exited with exit code 2
Julia script:
using ClusterManagers
addprocs(SlurmManager(3), partition="slurm", t="00:5:00")
hosts = []
pids = []
for i in workers()
host, pid = fetch(@spawnat i (gethostname(), getpid()))
println(host)
push!(hosts, host)
push!(pids, pid)
end
# The Slurm resource allocation is released when all the workers have
# exited
for i in workers()
rmprocs(i)
end
UPDATE
I seem to have a Slurm issue. I tried updating ClusterManagers as suggested by @user338207, and SlurmManager(3) instead of SlurmManager(2) as suggested by crstnbr.
srun -N 2 julia parallel2.jl
srun: error: node-0-2: task 2: Exited with exit code 1
srun: error: node-0-2: task 2: Exited with exit code 1
WARNING: dropping worker: file not created in 63 seconds
WARNING: dropping worker: file not created in 63 seconds
node-0-1 3 out of 3
node-0-1
WARNING: dropping worker: file not created in 63 seconds
ERROR: LoadError: connect: connection refused (ECONNREFUSED)
try_yieldto(::Base.##296#297{Task}, ::Task) at ./event.jl:189
wait() at ./event.jl:234
wait(::Condition) at ./event.jl:27
stream_wait(::TCPSocket, ::Condition, ::Vararg{Condition,N} where N) at ./stream.jl:42
wait_connected(::TCPSocket) at ./stream.jl:258
but srun -N 2 hostname works fine
This is how you could set up Julia on a Linux cluster and run a parallel task via Slurm.
Download the generic Linux binaries from julialang.org.
Put them somewhere, for example into ~/bin/julia-v0.6 (you will have to create this folder).
Create a julia-environment file in the same folder with the content
export PATH=$HOME/bin/julia-v0.6/bin:$PATH
export LD_LIBRARY_PATH=$HOME/bin/julia-v0.6/lib:$LD_LIBRARY_PATH
export CPATH=$HOME/bin/julia-v0.6/include:$CPATH
Now you can use sbatch myjobfile.sh to submit a job file like
#!/bin/bash -l
#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --ntasks-per-node=2
#SBATCH --time=00:10:00
#SBATCH --output=myoutput.log
#SBATCH --job-name=my-julia-job
source $HOME/bin/julia-v0.6/julia-environment
cd working/folder/of/your/choice
julia my_clustermanager_script.jl
(Note that one could also put a srun --ntasks=1 in front of the julia command, see this github issue.)
Of course, you can also start an interactive job by allocating resources with salloc.
UPDATE:
Running the job script above (via sbatch myjobfile.sh) with my_clustermanager_script.jl being (note SlurmManager(4) instead of SlurmManager(3))
using ClusterManagers
addprocs(SlurmManager(4), t="00:5:00")
hosts = []
pids = []
for i in workers()
host, pid = fetch(@spawnat i (gethostname(), getpid()))
println(host)
push!(hosts, host)
push!(pids, pid)
end
# The Slurm resource allocation is released when all the workers have
# exited
for i in workers()
rmprocs(i)
end
I get the following output files:
myoutput.log:
connecting to worker 1 out of 4
connecting to worker 2 out of 4
connecting to worker 3 out of 4
connecting to worker 4 out of 4
cheops30410
cheops30410
cheops30414
cheops30414
job0000.out: julia_worker:9009#173.12.2.191
job0001.out: julia_worker:9010#173.12.2.191
job0002.out: julia_worker:9010#173.12.2.192
job0003.out: julia_worker:9009#173.12.2.192
I use a similar script to crstnbr's, and in fact I have also run into the issue srun: unrecognized option '--enable-threaded-blas=false'. I had to change src/slurm.jl as described here:
https://github.com/JuliaParallel/ClusterManagers.jl/issues/75#issuecomment-319919108
This change has been implemented in version 0.2.0 of ClusterManagers.jl; maybe you are still using version 0.1.2. If that is the case, an upgrade might solve the issue.
Julia does not let you upgrade a package with local modifications. Such packages have a + sign following the version number.
Here are the steps to upgrade a dirty package if you are not interested in keeping the local modifications (in particular if the new version already includes the changes that you made to your local copy):
cd ~/.julia/v0.6/ClusterManagers/
git diff # show your modification
cp -R ~/.julia/v0.6/ClusterManagers/ ~/ClusterManagers.bak # backup copy
git checkout . # discard your modification
julia --eval 'Pkg.update("ClusterManagers")' # upgrade the package
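To check which version is currently installed before doing this, something like the following should work with the old (Julia 0.6) package manager:
julia --eval 'println(Pkg.installed()["ClusterManagers"])'   # prints the installed version, e.g. v"0.1.2"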

How to share memory on a cluster machine (qsub, OpenMPI)

Dear all,
I have a question about sharing memory in a cluster. I am new to clusters and have failed to solve my problem after trying for several weeks, so I am looking for help here; any suggestion would be appreciated!
I want to use SOAPdenovo, a piece of software that was used to assemble the human genome, to assemble my data. However, it failed in one step because of a shortage of memory (my machine has 512 GB). So I turned to a cluster machine (which has three big nodes, each with 512 GB of memory as well) and started learning to submit jobs with qsub. Considering that one node couldn't solve my problem, I googled and found that OpenMPI might help, but when I ran OpenMPI with demo data, it seemed to only run the command several times. Then I found that to use OpenMPI, the software must be built against the OpenMPI library, and I don't know whether SOAPdenovo supports OpenMPI; I have asked the author but haven't received an answer yet. Supposing SOAPdenovo supports OpenMPI, how should I solve my problem? If it doesn't support OpenMPI, can I use memory on different nodes to run the software?
This problem has tortured me so much; thanks for any help. The following is what I did and some information about the cluster machine:
Install openmpi and submit the job
1) The script of job:
#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
#
export PATH=/tools/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/tools/openmpi/lib:$LD_LIBRARY_PATH
soapPath="/tools/SOAPdenovo2/SOAPdenovo-63mer"
workPath="/NGS"
outputPath="assembly/soap/demo"
/tools/openmpi/bin/mpirun $soapPath all -s $workPath/$outputPath/config_file -K 23 -R -F -p 60 -V -o $workPath/$outputPath/graph_prefix > $workPath/$outputPath/ass.log 2> $workPath/$outputPath/ass.err
2) Submit the job:
qsub -pe orte 60 mpi.qsub
3) The log in ass.err
a) It seems it ran SOAPdenovo several times, according to the log:
cat ass.err | grep "Pregraph" | wc -l
60
b) detailed information
less ass.err (it seems it only ran SOAPdenovo several times, because when I run it on my own machine it only outputs one Pregraph):
Version 2.04: released on July 13th, 2012
Compile Apr 27 2016 15:50:02
********************
Pregraph
********************
Parameters: pregraph -s /NGS/assembly/soap/demo/config_file -K 23 -p 16 -R -o /NGS/assembly/soap/demo/graph_prefix
In /NGS/assembly/soap/demo/config_file, 1 lib(s), maximum read length 35, maximum name length 256.
Version 2.04: released on July 13th, 2012
Compile Apr 27 2016 15:50:02
********************
Pregraph
********************
and so on
c) information from stdout
cat ass.log:
--------------------------------------------------------------------------
WARNING: A process refused to die despite all the efforts!
This process may still be running and/or consuming resources.
Host: smp03
PID: 75035
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 58 with PID 0 on node c0214.local exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
Information about cluster:
1) qconf -sql
all.q
smp.q
2) qconf -spl
mpi
mpich
orte
zhongxm
3) qconf -sp zhongxm
pe_name zhongxm
slots 999
user_lists NONE
xuser_lists NONE
start_proc_args /bin/true
stop_proc_args /bin/true
allocation_rule $fill_up
control_slaves TRUE
job_is_first_task FALSE
urgency_slots min
accounting_summary FALSE
4) qconf -sq smp.q
qname smp.q
hostlist #smp.q
seq_no 0
load_thresholds np_load_avg=1.75
suspend_thresholds NONE
nsuspend 1
suspend_interval 00:05:00
priority 0
min_cpu_interval 00:05:00
processors UNDEFINED
qtype BATCH INTERACTIVE
ckpt_list NONE
pe_list make zhongxm
rerun FALSE
slots 1
tmpdir /tmp
shell /bin/csh
prolog NONE
epilog NONE
shell_start_mode posix_compliant
starter_method NONE
suspend_method NONE
resume_method NONE
terminate_method NONE
notify 00:00:60
owner_list NONE
user_lists NONE
xuser_lists NONE
subordinate_list NONE
complex_values NONE
projects NONE
xprojects NONE
calendar NONE
initial_state default
s_rt INFINITY
h_rt INFINITY
s_cpu INFINITY
h_cpu INFINITY
s_fsize INFINITY
h_fsize INFINITY
s_data INFINITY
h_data INFINITY
s_stack INFINITY
h_stack INFINITY
s_core INFINITY
h_core INFINITY
s_rss INFINITY
h_rss INFINITY
s_vmem INFINITY
h_vmem INFINITY
5) qconf -sq all.q
qname all.q
hostlist #allhosts
seq_no 0
load_thresholds np_load_avg=1.75
suspend_thresholds NONE
nsuspend 1
suspend_interval 00:05:00
priority 0
min_cpu_interval 00:05:00
processors UNDEFINED
qtype BATCH INTERACTIVE
ckpt_list NONE
pe_list make zhongxm
rerun FALSE
slots 16,[c0219.local=32]
tmpdir /tmp
shell /bin/csh
prolog NONE
epilog NONE
shell_start_mode posix_compliant
starter_method NONE
suspend_method NONE
resume_method NONE
terminate_method NONE
notify 00:00:60
owner_list NONE
user_lists mobile
xuser_lists NONE
subordinate_list NONE
complex_values NONE
projects NONE
xprojects NONE
calendar NONE
initial_state default
s_rt INFINITY
h_rt INFINITY
s_cpu INFINITY
h_cpu INFINITY
s_fsize INFINITY
h_fsize INFINITY
s_data INFINITY
h_data INFINITY
s_stack INFINITY
h_stack INFINITY
s_core INFINITY
h_core INFINITY
s_rss INFINITY
h_rss INFINITY
s_vmem INFINITY
h_vmem INFINITY
According to https://hpc.unt.edu/soapdenovo the software doesn't support MPI:
This code is NOT compiled with MPI, and should only be used in parallel on a SINGLE node, via a threaded model.
So, you can't just start the software with mpiexec on the cluster to get access to more memory. Cluster machines are connected by non-coherent networks (Ethernet, InfiniBand), which are slower than the memory bus, and the nodes in a cluster do not share their memory. Clusters use MPI libraries (Open MPI or MPICH) to work with the network, and all requests between nodes are explicit: the program calls MPI_Send in one process and MPI_Recv in another. There are also one-sided calls like MPI_Put/MPI_Get to access remote memory (RDMA, remote direct memory access), but this is not the same as local memory.
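For illustration, this is roughly what that explicit message-passing style looks like, shown here as a minimal sketch with Julia's MPI.jl (mentioned in the first question above) rather than C; the positional Send/Recv! signatures are the ones from older MPI.jl releases:
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
if rank == 0
    MPI.Send(fill(1.0, 10), 1, 0, comm)      # explicit send of a buffer to rank 1, tag 0
elseif rank == 1
    buf = zeros(10)
    MPI.Recv!(buf, 0, 0, comm)               # explicit receive from rank 0, tag 0
    println("rank 1 received sum = ", sum(buf))
end
MPI.Finalize()
Launched with something like: mpirun -np 2 julia mpi_sketch.jl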
osgx, thank you very much for your reply, and sorry for the delay of this message.
Since I don't have a computing background, I can't understand some of the terminology very well, like ELF. So I have some new questions, which I list below; thanks for the help in advance:
1) When I run ldd SOAPdenovo-63mer, it outputs "not a dynamic executable"; does this mean "the code is NOT compiled with MPI", as you mentioned?
2) In short, I can't solve the problem with the cluster, and I have to look for a machine with more than 512 GB of memory?
3) Also, I used another piece of software called ALLPATHS-LG (http://www.broadinstitute.org/software/allpaths-lg/blog/) that also failed because of a shortage of memory. According to FAQ C1 (http://www.broadinstitute.org/software/allpaths-lg/blog/?page_id=336), what does "it uses Shared Memory Parallelization" mean? Does it mean it can use memory across the cluster, or only memory within a node, so that I have to find a machine with enough memory?
C1. Can I run ALLPATHS-LG on a cluster?
You can, but it will only use one machine, not the entire cluster. That machine would need to have enough memory to fit the entire assembly. ALLPATHS-LG does not support distributed computing using MPI, instead it uses Shared Memory Parallelization.
By the way, this is the first time I have posted here. I think I should have used a comment to reply, but considering how many words this is, I used "Answer Your Question".
