I have 2 GPUs on my server, and I want to run a different training task on each of them.
For the first task, to force TensorFlow to use only one GPU, I added the following code at the top of my script:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
After running the first task, when I try to run the second task on the other GPU (with the same two lines of code), I get the error "No device GPU:1".
What is the problem?
Q : "What is the problem?"
The system needs to see the cards - validate the current state of the server, using the call to ( hwloc-tool ) lstopo :
$ lstopo --only osdev
GPU L#0 "card1"
GPU L#1 "renderD128"
GPU L#2 "card2"
GPU L#3 "renderD129"
GPU L#4 "card3"
GPU L#5 "renderD130"
GPU L#6 "card0"
GPU L#7 "controlD64"
Block(Disk) L#8 "sda"
Block(Disk) L#9 "sdb"
Net L#10 "eth0"
Net L#11 "eno1"
GPU L#12 "card4"
GPU L#13 "renderD131"
GPU L#14 "card5"
GPU L#15 "renderD132"
If more than just the above-mentioned card0 is shown, proceed with the proper naming / device id, and be sure to set it before doing any other imports, like those of pycuda and tensorflow:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1' # MUST PRECEDE ANY IMPORT-s
#---------------------------------------------------------------------
import pycuda as pyCUDA
import tensorflow as tf
...
#----------------------------------------------------------------------
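Note also that once CUDA_VISIBLE_DEVICES = '1' is set, the single remaining card is re-enumerated inside that process as GPU:0, not GPU:1, which is likely why asking for GPU:1 fails as reported. A minimal sketch to verify what TensorFlow actually sees (assuming TensorFlow 2.x; the matmul is just a placeholder workload):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'     # MUST PRECEDE ANY IMPORT-s

import tensorflow as tf

# with the mask above, the one visible card is enumerated as GPU:0
print(tf.config.list_physical_devices('GPU'))

with tf.device('/GPU:0'):                    # GPU:0 here == physical GPU 1
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)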
I am using coarrays to parallelize a Fortran code. The code works properly on my PC (Ubuntu 18, OpenCoarrays 2.0.0). However, when I run the code on the cluster (CentOS), it crashes with the following error:
=====================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= EXIT CODE: 9
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
=====================================================================================
APPLICATION TERMINATED WITH THE EXIT STRING: Killed (signal 9)
Error: Command:
/APP/enhpc/mpi/mpich2-gcc-hd/bin/mpiexec -n 10 -machinefile machines ./IPS
failed to run
Using the top command while the code was running, I found that the memory usage increases as the code runs. The problem comes from where I copy some data from another processor, for example:
a(:) = b(:)[k]
Since the code runs properly on my PC, what can be the reason for the memory increase on the cluster?
I should mention that I am running the code on the cores of a single node.
The memory increases continuously. It is a CentOS cluster; I do not know what kind of architecture it has. I am using OpenCoarrays v2.9.1, which uses Coarray Fortran (CAF) for compiling, together with GNU Fortran 10.1. I wrote a simple test code as follows:
program hello_image
  integer :: m, n, i, j
  integer, allocatable :: A(:)[:], B(:)
  m = 1e3
  n = 1e6
  allocate(A(n)[*], B(n))
  A(:) = 10
  B(:) = 20
  do j = 1, m
    do i = 1, n
      B(i) = A(i)[3] ! this line copies the data from image 3 to all other images
    end do
    write(*,*) j, this_image()
  end do
end program hello_image
When I run this code on my PC, the memory usage of every image is a constant 0.1% and does not increase. However, when I run the same code on the cluster, the memory usage increases continuously.
Output from my PC and output from the cluster: [screenshots omitted]
I'm creating a Snakemake workflow that will wrap up some of the tools in the NVIDIA Clara Parabricks pipelines. Because these tools run on GPUs, they can typically handle only one sample at a time; otherwise the GPU will run out of memory. However, Snakemake shoves all the samples through Parabricks at once, seemingly unaware of the GPU memory limits. One solution would be to tell Snakemake to process one sample at a time, thus the question:
How do I get Snakemake to process one sample at a time?
Because Parabricks is a licensed product (and therefore not necessarily reproducible), I will show an example of the Parabricks rule I am trying to run (pbrun fastq2bam), as well as a minimal reproducible example using open-source software (fastqc) which we can work on/from.
My parabricks rule - pbrun fastq2bam
Snakefile:
# Define samples from fastq dir using wildcards
SAMPLES, = glob_wildcards("../fastq/{sample}_1.filt.fastq.gz")

rule all:
    input:
        expand("{sample}_recalibrated.bam", sample = SAMPLES)

rule pbrun_fq2bam:
    input:
        R1 = "../fastq/{sample}_1.filt.fastq.gz",
        R2 = "../fastq/{sample}_2.filt.fastq.gz"
    output:
        bam = "{sample}_recalibrated.bam",
        recal = "{sample}_recal.txt"
    shell:
        "pbrun fq2bam --ref human_g1k_v37_decoy.fasta --in-fq {input.R1} {input.R2} --knownSites dbsnp_138.b37.vcf --out-bam {output.bam} --out-recal {output.recal}"
Run command:
snakemake -j 32 --use-conda
Error when four samples/exomes are present in the ../fastq/ directory:
GPU-BWA mem
ProgressMeter Reads Base Pairs Aligned
cudaSafeCall() failed at ParaBricks/src/samGenerator.cu:782 : out of memory
cudaSafeCall() failed at ParaBricks/src/samGenerator.cu:782 : out of memory
cudaSafeCall() failed at ParaBricks/src/chainGenerator.cu:185 : out of memory
cudaSafeCall() failed at ParaBricks/src/chainGenerator.cu:185 : out of memory
cudaSafeCall() failed at ParaBricks/src/chainGenerator.cu:185 : out of memory
cudaSafeCall() failed at ParaBricks/src/chainGenerator.cu:183 : out of memory
cudaSafeCall() failed at ParaBricks/src/chainGenerator.cu:185 : out of memory
cudaSafeCall() failed at ParaBricks/src/chainGenerator.cu:183 : out of memory
Minimal example - fastqc
Get data:
mkdir ../fastq/
gsutil cp -r gs://genomics-public-data/gatk-examples/example1/NA19913/* ../fastq/
Snakefile:
SAMPLES, = glob_wildcards("../fastq/{sample}_1.filt.fastq.gz")

rule all:
    input:
        expand(["{sample}_1.filt_fastqc.html", "{sample}_2.filt_fastqc.html"], sample = SAMPLES),
        expand(["{sample}_1.filt_fastqc.zip", "{sample}_2.filt_fastqc.zip"], sample = SAMPLES)

rule fastqc:
    input:
        R1 = "../fastq/{sample}_1.filt.fastq.gz",
        R2 = "../fastq/{sample}_2.filt.fastq.gz"
    output:
        html = ["{sample}_1.filt_fastqc.html", "{sample}_2.filt_fastqc.html"],
        zip = ["{sample}_1.filt_fastqc.zip", "{sample}_2.filt_fastqc.zip"]
    conda:
        "fastqc.yaml"
    shell:
        "fastqc {input.R1} {input.R2} --outdir ."
fastqc.yaml:
channels:
- bioconda
- conda-forge
- defaults
dependencies:
- bioconda::fastqc =0.11.9
Run command:
snakemake -j 32 --use-conda
Thanks in advance for any pointers!!
I would like to expand on the answer of @jafors. Instead of limiting the memory, it is probably better to make a gpu resource:
rule pbrun_fq2bam:
    ...
    resources:
        gpu=1
Then run your snakemake with --resources gpu=1.
This way you can still use memory and threads for other rules, and every resource describes what it actually is.
You could try adding threads: 32 to your rule, so Snakemake will use all given cores on one rule iteration/sample.
Memory can also be restricted using something like

resources:
    mem_mb=100

in the rule and --resources mem_mb=100 in the snakemake call. This would restrict the rule to use at most 100 MB of memory.
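Putting both answers together, a sketch of the Parabricks rule with an explicit GPU resource (the threads value here is illustrative, not a tested setting):

rule pbrun_fq2bam:
    input:
        R1 = "../fastq/{sample}_1.filt.fastq.gz",
        R2 = "../fastq/{sample}_2.filt.fastq.gz"
    output:
        bam = "{sample}_recalibrated.bam",
        recal = "{sample}_recal.txt"
    threads: 32
    resources:
        gpu=1
    shell:
        "pbrun fq2bam --ref human_g1k_v37_decoy.fasta --in-fq {input.R1} {input.R2} --knownSites dbsnp_138.b37.vcf --out-bam {output.bam} --out-recal {output.recal}"

Invoked with snakemake -j 32 --use-conda --resources gpu=1, Snakemake can only schedule one gpu unit at a time, so the samples go through this rule one after another.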
I am trying to understand how I can build a parallel computing pipeline for multiple subprocesses.
As I see it, each subprocess block waits for the previous code block to run, whereas my pipeline has no dependency on the previous run and could be handled in parallel. I want to understand whether this is possible, and if so, a sample syntax showing how to do that would be a great help! Thanks in advance.
import subprocess

subprocess.run(["python", "pipelinecode1.py",
                run_date, this_wk, last_wk, prev_wk])
subprocess.run(["python", "pipelinecode2.py",
                run_date, this_wk, last_wk, prev_wk])
subprocess.run(["python", "pipelinecode3.py",
                run_date, this_wk, last_wk, prev_wk])
The MCVE as-is shows zero dependency on the Python interpreter, so the most efficient way to run a set of mutually independent tasks (not a pipeline, where a one-step-after-another ordering of the processing steps "forms" the "pipeline") is GNU parallel:
$ parallel python {} run_date this_wk last_wk prev_wk ::: pipelinecode1.py \
pipelinecode2.py \
pipelinecode3.py
This way you do not waste CPU / cache resources, and you escape the blocking, one-after-another serialisation of the code execution, without any add-on overhead costs.
For all available configurables, read the respective details in man parallel.
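If you prefer to stay in Python, a minimal sketch that launches the same three scripts concurrently with subprocess.Popen (the argument values below are placeholders standing in for whatever the question computes earlier):

import subprocess

# placeholder arguments; in the question these are computed beforehand
run_date, this_wk, last_wk, prev_wk = "2021-01-04", "01", "53", "52"
args = [run_date, this_wk, last_wk, prev_wk]

# Popen does not block, so all three scripts start immediately
procs = [subprocess.Popen(["python", "pipelinecode%d.py" % i] + args)
         for i in (1, 2, 3)]

# wait for all of them to finish and collect the exit codes
exit_codes = [p.wait() for p in procs]
print(exit_codes)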
I tried running the test program for GPU usage:
from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x #threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and ('Gpu' not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
It only shows this (even after installing libgpuarray):
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 2.723539 seconds
Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
1.62323285]
Used the cpu
I would like to know how to utilise the integrated GPU of my MacBook Air (Early 2014).
My processor has Intel HD Graphics 5000 -- not NVIDIA, and hence not CUDA-compatible. Many links suggest using OpenCL, which was also supposed to come pre-installed with OS X, but I can't make head or tail of the links on the web.
I could not find much help on how to set up Theano for this in the docs either.
I just need to make Theano use my Mac's integrated GPU. Is this possible? If so, how? What are its prerequisites?
You can point Theano at the OpenCL device via the THEANO_FLAGS environment variable:

THEANO_FLAGS=device=opencl0:1 python ~/check_GPU.py
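Equivalently, the flag can be set from inside the script, as long as it happens before theano is imported (a sketch; the exact device string, opencl0:1 here, depends on how libgpuarray enumerates your platform and devices):

import os
# THEANO_FLAGS is only read when theano is imported, so set it first
os.environ['THEANO_FLAGS'] = 'device=opencl0:1'

import theano
print(theano.config.device)   # confirm which device was actually selected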
I have this little nonsense script here which I am executing in MATLAB R2013b:
clear all;
n = 2000;
times = 50;
i = 0;
tCPU = tic;
disp 'CPU::'
A = rand(n, n);
B = rand(n, n);
disp '::Go'
for i = 0:times
CPU = A * B;
end
tCPU = toc(tCPU);
tGPU = tic;
disp 'GPU::'
A = gpuArray(A);
B = gpuArray(B);
disp '::Go'
for i = 0:times
GPU = A * B ;
end
tGPU = toc(tGPU);
fprintf('On CPU: %.2f sec\nOn GPU: %.2f sec\n', tCPU, tGPU);
Unfortunately, after execution I receive a message from Windows saying: "Display driver stopped working and has recovered."
I assume this means that Windows did not get a response from my graphics card driver. The script itself returned without errors:
>> test
CPU::
::Go
GPU::
::Go
On CPU: 11.01 sec
On GPU: 2.97 sec
But whether or not the GPU runs out of memory, MATLAB is unable to use the GPU device again until I restart it. If I don't restart MATLAB, I just receive a message from CUDA:
>> test
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
Warning: An unexpected error occurred during CUDA
execution. The CUDA error was:
CUDA_ERROR_LAUNCH_TIMEOUT
> In test at 1
CPU::
::Go
GPU::
Error using gpuArray
An unexpected error occurred during CUDA execution.
The CUDA error was:
the launch timed out and was terminated
Error in test (line 21)
A = gpuArray(A);
Does anybody know how to avoid this issue or what I am doing wrong here?
If needed, my GPU Device:
>> gpuDevice
ans =
CUDADevice with properties:
Name: 'GeForce GTX 660M'
Index: 1
ComputeCapability: '3.0'
SupportsDouble: 1
DriverVersion: 6
ToolkitVersion: 5
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 2.1475e+09
FreeMemory: 1.9037e+09
MultiprocessorCount: 2
ClockRateKHz: 950000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
The key piece of information is this part of the gpuDevice output:
KernelExecutionTimeout: 1
This means that the host display driver is active on the GPU you are running the compute jobs on. The NVIDIA display driver contains a watchdog timer which kills any task that takes more than a predefined amount of time without yielding control back to the driver for screen refresh. This is intended to prevent the situation where a long-running or stuck compute job renders the machine unresponsive by freezing the display. The runtime of your MATLAB script is clearly exceeding the display driver's watchdog timer limit. Once that happens, the compute context held on the device is destroyed and MATLAB can no longer operate with the device. You might be able to reinitialise the context by calling reset, which I guess will run cudaDeviceReset() under the hood.
There is a lot of information about this watchdog timer on the web - for example, this Stack Overflow question. How to modify the timeout depends on your OS and hardware. The simplest ways to avoid it are to not run CUDA code on a display GPU, or to break your compute jobs into finer-grained pieces so that no single operation has a runtime which exceeds the timeout limit. Or just write faster code...
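For completeness, a sketch of attempting the recovery from within MATLAB (reset is the documented way to reinitialise a gpuDevice object; whether it succeeds after a watchdog kill may depend on the driver state):

% re-select device 1 (the index reported by gpuDevice above) and rebuild its context
dev = gpuDevice(1);
reset(dev);    % clears the device memory and reinitialises the CUDA context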