I am having an issue with CPU affinity when solving linear integer programs with MOSEK. My program parallelizes using Python's multiprocessing module, so MOSEK runs concurrently in each process. The machine has 48 cores, so I run 48 concurrent worker processes using the Pool class. MOSEK's documentation states that the API is thread safe.
Below is the output from top shortly after starting the program. It shows that ~50% of the CPU is idle. Only the first 20 lines of the top output are shown.
top - 22:04:42 up 5 days, 14:38, 3 users, load average: 10.67, 13.65, 6.29
Tasks: 613 total, 47 running, 566 sleeping, 0 stopped, 0 zombie
%Cpu(s): 46.3 us, 3.8 sy, 0.0 ni, 49.2 id, 0.7 wa, 0.0 hi, 0.0 si, 0.0 st
GiB Mem: 503.863 total, 101.613 used, 402.250 free, 0.482 buffers
GiB Swap: 61.035 total, 0.000 used, 61.035 free. 96.250 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
115517 njmeyer 20 0 171752 27912 11632 R 98.7 0.0 0:02.52 python
115522 njmeyer 20 0 171088 27472 11632 R 98.7 0.0 0:02.79 python
115547 njmeyer 20 0 171140 27460 11568 R 98.7 0.0 0:01.82 python
115550 njmeyer 20 0 171784 27880 11568 R 98.7 0.0 0:01.64 python
115540 njmeyer 20 0 171136 27456 11568 R 92.5 0.0 0:01.91 python
115551 njmeyer 20 0 371636 31100 11632 R 92.5 0.0 0:02.93 python
115539 njmeyer 20 0 171132 27452 11568 R 80.2 0.0 0:01.97 python
115515 njmeyer 20 0 171748 27908 11632 R 74.0 0.0 0:03.02 python
115538 njmeyer 20 0 171128 27512 11632 R 74.0 0.0 0:02.51 python
115558 njmeyer 20 0 171144 27528 11632 R 74.0 0.0 0:02.28 python
115554 njmeyer 20 0 527980 28728 11632 R 67.8 0.0 0:02.15 python
115524 njmeyer 20 0 527956 28676 11632 R 61.7 0.0 0:02.34 python
115526 njmeyer 20 0 527956 28704 11632 R 61.7 0.0 0:02.80 python
I checked the MOSEK parameters section of the documentation and didn't see anything related to CPU affinity. There are some flags related to multithreading within the optimizer; they default to off, and redundantly setting them to off changes nothing.
I checked the CPU affinity of the running python jobs, and many of them are bound to the same CPU. The weird part is that I can't set the CPU affinity, or at least it appears to be changed back soon after I change it.
I picked one of the jobs and set its CPU affinity by running taskset -p 0xFFFFFFFFFFFF 115526, ten times with one second between calls. Here is the affinity reported around each call.
pid 115526's current affinity mask: 10
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 7
pid 115526's current affinity mask: 800000000000
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: 800000000000
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: ffffffffffff
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: ffffffffffff
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: ffffffffffff
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: 200000000000
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 47
pid 115526's current affinity mask: ffffffffffff
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: 800000000000
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
pid 115526's current affinity mask: 800000000000
pid 115526's new affinity mask: ffffffffffff
pid 115526's current affinity list: 0-47
It seems like something is continually changing the CPU affinity at runtime.
I have also tried setting the CPU affinity of the parent process, but it has the same effect.
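For example, this is the sort of thing I tried for the parent process (a sketch, not my original code; os.sched_setaffinity requires Python 3, and workers forked by the Pool inherit the parent's mask):

import os
import multiprocessing

# reset the parent's affinity to all cores before creating the pool,
# so each forked worker starts with the full mask
os.sched_setaffinity(0, range(os.cpu_count()))
pool = multiprocessing.Pool()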
Here is the code I am running.
import mosek
import sys
import cPickle as pickle
import multiprocessing
import time

def mosekOptim(aCols, aVals, b, c, nCon, nVar, numTrt):
    """Solve the linear integer program.

    Solve the program
        max  c' x
        s.t. Ax <= b
    """
    ## set up MOSEK
    with mosek.Env() as env, env.Task() as task:
        ## collect the solver log so it can be shown on failure
        mosekMsg = []
        task.set_Stream(mosek.streamtype.log,
                        lambda text: mosekMsg.append(text))
        task.appendcons(nCon)
        task.appendvars(nVar)
        inf = float("inf")
        ## objective c
        for j, cj in enumerate(c):
            task.putcj(j, cj)
        ## bounds on A
        bkc = [mosek.boundkey.fx] + [mosek.boundkey.up
                                     for i in range(nCon - 1)]
        blc = [float(numTrt)] + [-inf for i in range(nCon - 1)]
        buc = b
        ## bounds on x
        bkx = [mosek.boundkey.ra for i in range(nVar)]
        blx = [0.0] * nVar
        bux = [1.0] * nVar
        for j, a in enumerate(zip(aCols, aVals)):
            task.putarow(j, a[0], a[1])
        for j, bc in enumerate(zip(bkc, blc, buc)):
            task.putconbound(j, bc[0], bc[1], bc[2])
        for j, bx in enumerate(zip(bkx, blx, bux)):
            task.putvarbound(j, bx[0], bx[1], bx[2])
        task.putobjsense(mosek.objsense.maximize)
        ## integer type
        task.putvartypelist(range(nVar),
                            [mosek.variabletype.type_int
                             for i in range(nVar)])
        task.optimize()
        task.solutionsummary(mosek.streamtype.msg)
        prosta = task.getprosta(mosek.soltype.itg)
        solsta = task.getsolsta(mosek.soltype.itg)
        xx = mosek.array.zeros(nVar, float)
        task.getxx(mosek.soltype.itg, xx)
        if solsta not in [mosek.solsta.integer_optimal,
                          mosek.solsta.near_integer_optimal]:
            print "".join(mosekMsg)
            raise ValueError("Non optimal or infeasible.")
        else:
            return xx

def reps(secs, *args):
    start = time.time()
    while time.time() - start < secs:
        for i in range(100):
            mosekOptim(*args)

def main():
    with open("data.txt", "r") as f:
        data = pickle.loads(f.read())
    args = (60,) + data
    pool = multiprocessing.Pool()
    jobs = []
    for i in range(multiprocessing.cpu_count()):
        jobs.append(pool.apply_async(reps, args=args))
    pool.close()
    pool.join()

if __name__ == "__main__":
    main()
The code unpickles data I precomputed. These objects are the constraints and coefficients for the linear program. I have the code and this data file hosted in this repository.
Has anyone else experienced this behavior with MOSEK? Any suggestions for how to proceed?
I contacted support, and they suggested setting MSK_IPAR_NUM_THREADS to 1. Each of my problems takes a fraction of a second to solve, so it never looked like the solver was using multiple cores. I should have checked the docs for default values.
In my code, I added task.putintparam(mosek.iparam.num_threads,1) right after the with statement. This fixed the problem.
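For reference, here is roughly where the parameter goes (a sketch of the fix described above; mosek.iparam.num_threads is the Python name of MSK_IPAR_NUM_THREADS):

with mosek.Env() as env, env.Task() as task:
    ## one solver thread per task; the 48 pool workers already
    ## provide the parallelism across cores
    task.putintparam(mosek.iparam.num_threads, 1)
    task.appendcons(nCon)
    ## ... rest of the model setup unchanged ...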
Related
Does anyone have any ideas on why these logs are giving these errors: Out of memory: kill process ... nginx invoked oom-killer?
Lately, our CMS has been going down and we have to manually restart the server in AWS, and we're not sure what is causing this behavior.
Here are the exact log lines that repeated 33 times while the server was down:
Out of memory: kill process 15654 (ruby) score 338490 or a child
Killed process 15654 (ruby) vsz:1353960kB, anon-rss:210704kB, file-rss:0kB
nginx invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
nginx cpuset=/ mems_allowed=0
Pid: 8729, comm: nginx Tainted: G W 2.6.35.14-97.44.amzn1.x86_64 #1
Call Trace:
[<ffffffff8108e638>] ? cpuset_print_task_mems_allowed+0x98/0xa0
[<ffffffff810bb157>] dump_header.clone.1+0x77/0x1a0
[<ffffffff81318d49>] ? _raw_spin_unlock_irqrestore+0x19/0x20
[<ffffffff811ab3af>] ? ___ratelimit+0x9f/0x120
[<ffffffff810bb2f6>] oom_kill_process.clone.0+0x76/0x140
[<ffffffff810bb4d8>] __out_of_memory+0x118/0x190
[<ffffffff810bb5d2>] out_of_memory+0x82/0x1c0
[<ffffffff810beb89>] __alloc_pages_nodemask+0x689/0x6a0
[<ffffffff810e7864>] alloc_pages_current+0x94/0xf0
[<ffffffff810b87ef>] __page_cache_alloc+0x7f/0x90
[<ffffffff810c15e0>] __do_page_cache_readahead+0xc0/0x200
[<ffffffff810c173c>] ra_submit+0x1c/0x20
[<ffffffff810b9f63>] filemap_fault+0x3e3/0x430
[<ffffffff810d023f>] __do_fault+0x4f/0x4b0
[<ffffffff810d2774>] handle_mm_fault+0x1b4/0xb40
[<ffffffff81007682>] ? check_events+0x12/0x20
[<ffffffff81006f1d>] ? xen_force_evtchn_callback+0xd/0x10
[<ffffffff81007682>] ? check_events+0x12/0x20
[<ffffffff8131c752>] do_page_fault+0x112/0x310
[<ffffffff813194b5>] page_fault+0x25/0x30
Mem-Info:
Node 0 DMA per-cpu:
CPU 0: hi: 0, btch: 1 usd: 0
CPU 1: hi: 0, btch: 1 usd: 0
CPU 2: hi: 0, btch: 1 usd: 0
CPU 3: hi: 0, btch: 1 usd: 0
Node 0 DMA32 per-cpu:
CPU 0: hi: 186, btch: 31 usd: 35
CPU 1: hi: 186, btch: 31 usd: 0
CPU 2: hi: 186, btch: 31 usd: 30
CPU 3: hi: 186, btch: 31 usd: 0
Node 0 Normal per-cpu:
CPU 0: hi: 186, btch: 31 usd: 202
CPU 1: hi: 186, btch: 31 usd: 30
CPU 2: hi: 186, btch: 31 usd: 59
CPU 3: hi: 186, btch: 31 usd: 140
active_anon:3438873 inactive_anon:284496 isolated_anon:0
active_file:0 inactive_file:62 isolated_file:64
unevictable:0 dirty:0 writeback:0 unstable:0
free:16763 slab_reclaimable:1340 slab_unreclaimable:2956
mapped:29 shmem:12 pagetables:11130 bounce:0
Node 0 DMA free:7892kB min:16kB low:20kB high:24kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15772kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 4024 14836 14836
Node 0 DMA32 free:47464kB min:4224kB low:5280kB high:6336kB active_anon:3848564kB inactive_anon:147080kB active_file:0kB inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:4120800kB mlocked:0kB dirty:0kB writeback:0kB mapped:60kB shmem:0kB slab_reclaimable:28kB slab_unreclaimable:268kB kernel_stack:48kB pagetables:8604kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:82 all_unreclaimable? yes
lowmem_reserve[]: 0 0 10811 10811
Node 0 Normal free:11184kB min:11352kB low:14188kB high:17028kB active_anon:9906928kB inactive_anon:990904kB active_file:0kB inactive_file:1116kB unevictable:0kB isolated(anon):0kB isolated(file):256kB present:11071436kB mlocked:0kB dirty:0kB writeback:0kB mapped:56kB shmem:48kB slab_reclaimable:5332kB slab_unreclaimable:11556kB kernel_stack:2400kB pagetables:35916kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:32 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Node 0 DMA: 3*4kB 1*8kB 2*16kB 1*32kB 0*64kB 3*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 1*4096kB = 7892kB
Node 0 DMA32: 62*4kB 104*8kB 53*16kB 29*32kB 27*64kB 7*128kB 2*256kB 3*512kB 1*1024kB 3*2048kB 8*4096kB = 47464kB
Node 0 Normal: 963*4kB 0*8kB 5*16kB 4*32kB 7*64kB 5*128kB 6*256kB 1*512kB 2*1024kB 1*2048kB 0*4096kB = 11292kB
318 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap = 0kB
Total swap = 0kB
3854801 pages RAM
86406 pages reserved
14574 pages shared
3738264 pages non-shared
It's happening because your server is running out of memory. To solve this problem you have two options:
1. Upgrade your server's RAM or add swap (upgrading physical RAM is recommended over relying on swap).
2. Limit nginx's RAM use.
To limit nginx's RAM use, open the /etc/nginx/nginx.conf file and add client_max_body_size <your_value_here> under the http configuration block. For example:
worker_processes 1;
http {
client_max_body_size 10M;
...
}
Note: use M for MB, G for GB and T for TB
I am trying to parallelize a Monte Carlo simulation using OpenCL, with MWC64X as the uniform random number generator. The code runs well on different Intel CPUs: the output of the parallel computation is very close to the sequential one.
Using OpenCL device: Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz
Literal influence running time: 0.029048 seconds r1 seqInfl= 0.4771
Literal influence running time: 0.029762 seconds r2 seqInfl= 0.4771
Literal influence running time: 0.029742 seconds r3 seqInfl= 0.4771
Literal influence running time: 0.02971 seconds ra seqInfl= 0.4771
Literal influence running time: 0.029225 seconds trust1-57 seqInfl= 0.6001
Literal influence running time: 0.04992 seconds trust110-1 seqInfl= 0
Literal influence running time: 0.034636 seconds trust4-57 seqInfl= 0
Literal influence running time: 0.049079 seconds trust57-110 seqInfl= 0
Literal influence running time: 0.024442 seconds trust57-4 seqInfl= 0.8026
Literal influence running time: 0.04946 seconds trust33-1 seqInfl= 0
Literal influence running time: 0.049071 seconds trust57-33 seqInfl= 0
Literal influence running time: 0.053117 seconds trust4-1 seqInfl= 0.1208
Literal influence running time: 0.051642 seconds trust57-1 seqInfl= 0
Literal influence running time: 0.052052 seconds trust57-64 seqInfl= 0
Literal influence running time: 0.052118 seconds trust64-1 seqInfl= 0
Literal influence running time: 0.051998 seconds trust57-7 seqInfl= 0
Literal influence running time: 0.052069 seconds trust7-1 seqInfl= 0
Total number of literals: 17
Sequential influence running time: 0.71728 seconds
Sequential maxInfluence Literal: trust57-4 0.8026
index1= 17 size= 51 dim1_size= 6
sum0:4781 influence0:0.478100 sum2:4781 influence2:0.478100 sum6:0 influence6:0.000000 sum10:0 sum12:0 influence12:0.000000 sum7:0 influence7:0.000000 influence10:0.000000 sum4:5962 influence4:0.596200 sum8:7971 influence8:0.797100 sum1:4781 influence1:0.478100 sum3:4781 influence3:0.478100 sum13:0 influence13:0.000000 sum11:1261 influence11:0.126100 sum9:0 influence9:0.000000 sum14:0 influence14:0.000000 sum5:0 influence5:0.000000 sum15:0 influence15:0.000000 sum16:0 influence16:0.000000
Parallel influence running time: 0.054391 seconds
Parallel maxInfluence Literal: trust57-4 Infl=0.7971
However, when I run the code on a GeForce GTX 1080 Ti (driver 430.40 per NVIDIA-SMI, CUDA 10.1, OpenCL 1.2 CUDA), the output is as below:
Using OpenCL device: GeForce GTX 1080 Ti
Influence:
Literal influence running time: 0.011119 seconds r1 seqInfl= 0.4771
Literal influence running time: 0.011238 seconds r2 seqInfl= 0.4771
Literal influence running time: 0.011408 seconds r3 seqInfl= 0.4771
Literal influence running time: 0.01109 seconds ra seqInfl= 0.4771
Literal influence running time: 0.011132 seconds trust1-57 seqInfl= 0.6001
Literal influence running time: 0.018978 seconds trust110-1 seqInfl= 0
Literal influence running time: 0.013093 seconds trust4-57 seqInfl= 0
Literal influence running time: 0.018968 seconds trust57-110 seqInfl= 0
Literal influence running time: 0.009105 seconds trust57-4 seqInfl= 0.8026
Literal influence running time: 0.018753 seconds trust33-1 seqInfl= 0
Literal influence running time: 0.018583 seconds trust57-33 seqInfl= 0
Literal influence running time: 0.02005 seconds trust4-1 seqInfl= 0.1208
Literal influence running time: 0.01957 seconds trust57-1 seqInfl= 0
Literal influence running time: 0.019686 seconds trust57-64 seqInfl= 0
Literal influence running time: 0.019632 seconds trust64-1 seqInfl= 0
Literal influence running time: 0.019687 seconds trust57-7 seqInfl= 0
Literal influence running time: 0.019859 seconds trust7-1 seqInfl= 0
Total number of literals: 17
Sequential influence running time: 0.272032 seconds
Sequential maxInfluence Literal: trust57-4 0.8026
index1= 17 size= 51 dim1_size= 6
sum0:10000 sum1:10000 sum2:10000 sum3:10000 sum4:10000 sum5:0 sum6:0 sum7:0 sum8:10000 sum9:0 sum10:0 sum11:0 sum12:0 sum13:0 sum14:0 sum15:0 sum16:0
Parallel influence running time: 0.193581 seconds
The "Influence" value equals sum*1.0/10000, thus the parallel influence only composes of 1 and 0, which is incorrect (in GPU runs) and doesn't happen when parallelizing on a Intel CPU.
When I check the output of the random number generator if(flag==0) printf("randint=%u",randint);, it seems the outputs are all zero on GPU. Below is the clinfo and the .cl code:
Device Name GeForce GTX 1080 Ti
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 430.40
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 68:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 28
Max clock frequency 1721MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 11720130560 (10.92GiB)
Error Correction support No
Max memory allocation 2930032640 (2.729GiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 458752 (448KiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 16
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
printf() buffer size 1048576 (1024KiB)
#define N 70 // N > index, which is the total number of literals
#define BASE 4294967296UL

//! Represents the state of a particular generator
typedef struct{ uint x; uint c; } mwc64x_state_t;

enum{ MWC64X_A = 4294883355U };
enum{ MWC64X_M = 18446383549859758079UL };

void MWC64X_Step(mwc64x_state_t *s)
{
    uint X=s->x, C=s->c;
    uint Xn=MWC64X_A*X+C;
    uint carry=(uint)(Xn<C); // The (Xn<C) will be zero or one for scalar
    uint Cn=mad_hi(MWC64X_A,X,carry);
    s->x=Xn;
    s->c=Cn;
}

//! Return a 32-bit integer in the range [0..2^32)
uint MWC64X_NextUint(mwc64x_state_t *s)
{
    uint res=s->x ^ s->c;
    MWC64X_Step(s);
    return res;
}

__kernel void setInfluence(const int literals, const int size, const int dim1_size, __global int* lambdas, __global float* lambdap, __global int* dim2_size, __global float* influence){
    int flag=get_global_id(0);
    int sum=0;
    int count=10000;
    int assignment[N];
    //or try to get newlambda like original version does
    if(flag < literals){
        mwc64x_state_t rng;
        for(int i=0; i<count; i++){
            for(int j=0; j<size; j++){
                uint randint=MWC64X_NextUint(&rng);
                float rand=randint*1.0/BASE;
                //if(flag==0)
                //    printf("randint=%u",randint);
                if(lambdap[j]<rand)
                    assignment[lambdas[j]]=0;
                else
                    assignment[lambdas[j]]=1;
            }
            //the true case
            assignment[flag]=1;
            int valuet=0;
            int index=0;
            for(int m=0; m<dim1_size; m++){
                int valueMono=1;
                for(int n=0; n<dim2_size[m]; n++){
                    if(assignment[lambdas[index+n]]==0){
                        valueMono=0;
                        index+=dim2_size[m];
                        break;
                    }
                }
                if(valueMono==1){
                    valuet=1;
                    break;
                }
            }
            //the false case
            assignment[flag]=0;
            int valuef=0;
            index=0;
            for(int m=0; m<dim1_size; m++){
                int valueMono=1;
                for(int n=0; n<dim2_size[m]; n++){
                    if(assignment[lambdas[index+n]]==0){
                        valueMono=0;
                        index+=dim2_size[m];
                        break;
                    }
                }
                if(valueMono==1){
                    valuef=1;
                    break;
                }
            }
            sum += valuet-valuef;
        }
        influence[flag] = 1.0*sum/count;
        printf("sum%d:%d\t", flag, sum);
    }
}
What might be the problem when running the code on the GPU? Is it MWC64X? According to its author, it can perform well on NVIDIA GPUs. If so, how can I fix it; if not, what might the problem be?
(This started out as a comment, it turns out this was the source of the problem so I'm turning it into an answer.)
You're not initialising your mwc64x_state_t rng; variable before reading from it, so any results will be undefined:
mwc64x_state_t rng;
for(int i=0; i<count; i++){
    for(int j=0; j<size; j++){
        uint randint=MWC64X_NextUint(&rng);
Where MWC64X_NextUint() immediately reads from the rng state before updating it:
uint MWC64X_NextUint(mwc64x_state_t *s)
{
    uint res=s->x ^ s->c;
Note that you will probably want to seed your RNG differently for each work-item, otherwise you will get nasty correlation artifacts in your results.
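For illustration, here is a minimal sketch (not the poster's code) of seeding each work-item with a distinct, reproducible state, assuming a host-supplied seed is passed as a kernel argument and reusing the mwc64x_state_t and MWC64X_NextUint from the code above. The full mwc64x.cl library also ships a MWC64X_SeedStreams() helper, which is the more robust option since it skips each work-item's stream far enough ahead that streams cannot overlap:

// Hypothetical kernel: derive a per-work-item RNG state from a
// host-chosen seed and the global id. Avoid the all-zero state,
// which MWC64X_Step() maps back to itself.
__kernel void seedDemo(const ulong seed, __global uint* out)
{
    uint gid = (uint)get_global_id(0);
    mwc64x_state_t rng;
    rng.x = (uint)(seed ^ gid) | 1u;   // force x non-zero
    rng.c = (uint)(seed >> 32) + gid;  // decorrelate the carry word
    out[gid] = MWC64X_NextUint(&rng);  // each item draws its own stream
}

The same two assignments, placed right before the i-loop in setInfluence, give each work-item its own sequence while keeping runs reproducible for a fixed seed.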
Every use of pseudo-random numbers is a next-level challenge on truly [PARALLEL] computing platforms (platforms, not languages).
Either there is some source of randomness, which gets us into trouble once massively parallel requests must be served fairly in a truly [PARALLEL] fashion (hardware resources may help here, yet at the cost of not being able to reproduce the same behaviour "outside" of that very platform and moment in time, unless such a source is software-operated with a seed-injection feature that sets up the "just"-pseudo-random algorithm producing a purely [SERIAL] sequence of "just"-pseudo-random numbers).
Or there is some "shared" generator of pseudo-random numbers, which enjoys a higher system-wide level of entropy (good for the resulting "quality" of pseudo-randomness) but at the cost of a purely serial dependence (no parallel execution is possible; the sequence is served one request after another) and close to zero chance of repeatable runs (a must for reproducible science) providing the same sequences, needed for testing and method-validation cases.
SUMMARY:
The code may employ work-item-"private" pseudo-random generator functions (privacy is a must, both for parallel code execution and for the mutual independence, i.e. non-interference, of the generated streams), yet each instance must be a) independently initialised, so as to provide the expected level of randomness achievable in parallelised code-runs, and b) initialised in a repeatably reproducible manner, for the sake of re-running tests at different times, often on different OpenCL target computing platforms.
For __kernel-s that do not rely on hardware-specific sources of randomness, meeting conditions a) and b) suffices to obtain repeatably reproducible (same) results for testing "in vitro", while still providing a reasonably random method for generating results during generic production-level code-runs "in vivo".
The comparison of net run-times (benchmarked above) seems to show that Amdahl's-law add-on overhead costs, plus a tail-end effect of the atomicity of work, ultimately made the net run-time ~3.6x faster on the XEON than on the GPU:
index1 = 17
size = 51
dim1_size = 6
sum0: 4781 influence0: 0.478100
sum2: 4781 influence2: 0.478100
sum6: 0 influence6: 0.000000
sum10: 0 influence10: 0.000000
sum12: 0 influence12: 0.000000
sum7: 0 influence7: 0.000000
sum4: 5962 influence4: 0.596200
sum8: 7971 influence8: 0.797100
sum1: 4781 influence1: 0.478100
sum3: 4781 influence3: 0.478100
sum13: 0 influence13: 0.000000
sum11: 1261 influence11: 0.126100
sum9: 0 influence9: 0.000000
sum14: 0 influence14: 0.000000
sum5: 0 influence5: 0.000000
sum15: 0 influence15: 0.000000
sum16: 0 influence16: 0.000000
Parallel influence running time: 0.054391 seconds on XEON E5-2630L v3 @ 1.80GHz using OpenCL

index1 = 17
size = 51
dim1_size = 6
sum0: 10000
sum1: 10000
sum2: 10000
sum3: 10000
sum4: 10000
sum5: 0
sum6: 0
sum7: 0
sum8: 10000
sum9: 0
sum10: 0
sum11: 0
sum12: 0
sum13: 0
sum14: 0
sum15: 0
sum16: 0
Parallel influence running time: 0.193581 seconds on GeForce GTX 1080 Ti using OpenCL
I need some clarification on timer resolution. I'm trying to learn profiling in OpenCL. I have a reduction algorithm implemented in OpenCL and want to measure the kernel execution time by getting the total elapsed time in the code given below. I ran this code on different devices, and here are the results:
On CPU -- AMD FX 770K
Total time = 352,855,601
CL_DEVICE_PROFILING_TIMER_RESOLUTION = 69 ns
On GPU -- AMD Radeon R7 240
Total time = 172,297
CL_DEVICE_PROFILING_TIMER_RESOLUTION = 1 ns
On another GPU -- GeForce GT 610
Total time = 1,725,504
CL_DEVICE_PROFILING_TIMER_RESOLUTION = 1000 ns
The "Total time" given above is in actual nanoseconds? or I need to divide them by the time resolution to get the actual execution time? How the timer resolution can help us?
Here is a part of the code:
/* Enqueue kernel */
err = clEnqueueNDRangeKernel(queue, kernel[i], 1, NULL, &global_size,
                             &local_size, 0, NULL, &prof_event);
if (err < 0) {
    perror("Couldn't enqueue the kernel");
    exit(1);
}

/* Finish processing the queue and get profiling information */
clFinish(queue);
clGetEventProfilingInfo(prof_event, CL_PROFILING_COMMAND_START,
                        sizeof(time_start), &time_start, NULL);
clGetEventProfilingInfo(prof_event, CL_PROFILING_COMMAND_END,
                        sizeof(time_end), &time_end, NULL);
total_time = time_end - time_start;
printf("Total time = %lu\n\n", total_time);
The specification is pretty clear on this: "current device time counter in nanoseconds".
The times are always in nanoseconds. The resolution query is there so you can find out how accurate the data is. For example, given the measurements and resolutions you posted, you can deduce the error margin of each measurement:
AMD FX 770K:
Measured: 352,855,601 ± 69 ns
Actual: 352,855,532 - 352,855,670
AMD Radeon R7 240:
Measured: 172,297 ± 1 ns
Actual: 172,296 - 172,298
GeForce GT 610:
Measured: 1,725,504 ± 1000 ns
Actual: 1,724,504 - 1,726,504
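For completeness, a small sketch (assuming the same device, time_start, and time_end variables as in the question) that queries the resolution and prints the measurement together with its error margin:

/* Sketch: report elapsed time together with its error margin.   */
/* CL_DEVICE_PROFILING_TIMER_RESOLUTION is given in nanoseconds. */
size_t resolution;
clGetDeviceInfo(device, CL_DEVICE_PROFILING_TIMER_RESOLUTION,
                sizeof(resolution), &resolution, NULL);
cl_ulong total_time = time_end - time_start;  /* already in ns */
printf("Total time = %lu +/- %zu ns\n",
       (unsigned long)total_time, resolution);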
I have the top command results below from my RHEL 6 server, which is running PostgreSQL.
I see 35.8% idle in the CPU(s) line, while all the per-process CPU usages below show roughly 100%.
So how should I read this output?
top - 03:06:30 up 97 days, 20:15, 3 users, load average: 10.85, 10.51, 10.13
Tasks: 738 total, 14 running, 724 sleeping, 0 stopped, 0 zombie
Cpu(s): 53.3%us, 9.6%sy, 0.0%ni, 35.8%id, 0.6%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 32077620k total, 24335372k used, 7742248k free, 19084k buffers
Swap: 81919992k total, 407968k used, 81512024k free, 18686780k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
19171 enterpri 20 0 8590m 966m 951m R 100.0 3.1 6:24.51 edb-postgres
19588 enterpri 20 0 8590m 956m 941m R 100.0 3.1 1:20.51 edb-postgres
18494 enterpri 20 0 8590m 959m 944m R 99.8 3.1 18:18.75 edb-postgres
18683 enterpri 20 0 8588m 984m 975m R 99.8 3.1 6:22.80 edb-postgres
19158 enterpri 20 0 8592m 1.0g 1.0g R 99.8 3.3 5:40.16 edb-postgres
19167 enterpri 20 0 8589m 959m 945m R 99.8 3.1 7:48.53 edb-postgres
19590 enterpri 20 0 8586m 945m 933m R 99.8 3.0 2:51.32 edb-postgres
19591 enterpri 20 0 8588m 950m 936m R 99.8 3.0 3:07.77 edb-postgres
19592 enterpri 20 0 8589m 948m 935m R 99.8 3.0 2:52.66 edb-postgres
You have a lot of CPUs (how many?) on your system. Some of them are very busy running postgres, and some of them are not.
In your version of top, %CPU represents the percent of a single CPU, not the percent of total system CPU. A threaded application could show more than 100% in one entry, but PostgreSQL is not threaded within a single process. As a rough check: about 64% of total CPU is busy (53.3 us + 9.6 sy + 0.7 si), and ~14 processes are each saturating one CPU, which suggests on the order of 14 / 0.64, i.e. roughly 22 logical CPUs.
I am new to Go and trying to figure out how it manages memory consumption.
I have trouble with memory in one of my test projects. I don't understand why Go uses more and more memory (never freeing it) when my program runs for a long time.
I am running the test case provided below. After the first allocation, the program uses nearly 350 MB of memory (according to Activity Monitor). Then I try to free it, and Activity Monitor shows that memory consumption doubles. Why?
I am running this code on OS X using Go 1.0.3.
What is wrong with this code? And what is the right way to manage large variables in Go programs?
I had another memory-management-related problem when implementing an algorithm that uses a lot of time and memory; after running it for some time it throws an "out of memory" exception.
package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("getting memory")
    tmp := make([]uint32, 100000000)
    for kk := range tmp {
        tmp[kk] = 0
    }
    time.Sleep(5 * time.Second)

    fmt.Println("returning memory")
    tmp = make([]uint32, 1)
    tmp = nil
    time.Sleep(5 * time.Second)

    fmt.Println("getting memory")
    tmp = make([]uint32, 100000000)
    for kk := range tmp {
        tmp[kk] = 0
    }
    time.Sleep(5 * time.Second)

    fmt.Println("returning memory")
    tmp = make([]uint32, 1)
    tmp = nil
    time.Sleep(5 * time.Second)
}
Currently, Go uses a mark-and-sweep garbage collector, which in general does not define exactly when an object is thrown away.
However, if you look closely, there is a goroutine called sysmon which essentially runs as long as your program does and calls the GC periodically:
// forcegcperiod is the maximum time in nanoseconds between garbage
// collections. If we go this long without a garbage collection, one
// is forced to run.
//
// This is a variable for testing purposes. It normally doesn't change.
var forcegcperiod int64 = 2 * 60 * 1e9
(...)
// If a heap span goes unused for 5 minutes after a garbage collection,
// we hand it back to the operating system.
scavengelimit := int64(5 * 60 * 1e9)
forcegcperiod determines the period after which the GC is run by force. scavengelimit determines when spans are returned to the operating system. Spans are runs of memory pages which can hold several objects. They are kept around for scavengelimit time and are freed if no object sits on them once scavengelimit is exceeded.
Further down in the code you can see that there is a trace option. You can use it to see whenever the scavenger thinks it needs to clean up:
$ GOGCTRACE=1 go run gc.go
gc1(1): 0+0+0 ms 0 -> 0 MB 423 -> 350 (424-74) objects 0 handoff
gc2(1): 0+0+0 ms 1 -> 0 MB 2664 -> 1437 (2880-1443) objects 0 handoff
gc3(1): 0+0+0 ms 1 -> 0 MB 4117 -> 2213 (5712-3499) objects 0 handoff
gc4(1): 0+0+0 ms 2 -> 1 MB 3128 -> 2257 (6761-4504) objects 0 handoff
gc5(1): 0+0+0 ms 2 -> 0 MB 8892 -> 2531 (13734-11203) objects 0 handoff
gc6(1): 0+0+0 ms 1 -> 1 MB 8715 -> 2689 (20173-17484) objects 0 handoff
gc7(1): 0+0+0 ms 2 -> 1 MB 5231 -> 2406 (22878-20472) objects 0 handoff
gc1(1): 0+0+0 ms 0 -> 0 MB 172 -> 137 (173-36) objects 0 handoff
getting memory
gc2(1): 0+0+0 ms 381 -> 381 MB 203 -> 202 (248-46) objects 0 handoff
returning memory
getting memory
returning memory
As you can see, no GC invocation happens between getting and returning the memory. However, if you change the delay from 5 seconds to 3 minutes (more than the 2 minutes of forcegcperiod), the objects are removed by the GC:
returning memory
scvg0: inuse: 1, idle: 1, sys: 3, released: 0, consumed: 3 (MB)
scvg0: inuse: 381, idle: 0, sys: 382, released: 0, consumed: 382 (MB)
scvg1: inuse: 1, idle: 1, sys: 3, released: 0, consumed: 3 (MB)
scvg1: inuse: 381, idle: 0, sys: 382, released: 0, consumed: 382 (MB)
gc9(1): 1+0+0 ms 1 -> 1 MB 4485 -> 2562 (26531-23969) objects 0 handoff
gc10(1): 1+0+0 ms 1 -> 1 MB 2563 -> 2561 (26532-23971) objects 0 handoff
scvg2: GC forced // forcegc (2 minutes) exceeded
scvg2: inuse: 1, idle: 1, sys: 3, released: 0, consumed: 3 (MB)
gc3(1): 0+0+0 ms 381 -> 381 MB 206 -> 206 (252-46) objects 0 handoff
scvg2: GC forced
scvg2: inuse: 381, idle: 0, sys: 382, released: 0, consumed: 382 (MB)
getting memory
The memory is still not freed, but the GC has marked the memory region as unused. Freeing will begin when a used span has been unused for longer than limit. From the scavenger code:
if(s->unusedsince != 0 && (now - s->unusedsince) > limit) {
    // ...
    runtime·SysUnused((void*)(s->start << PageShift), s->npages << PageShift);
}
This behavior may of course change over time, but I hope you now have a bit of a feel for when objects are thrown away by force and when not.
As pointed out by zupa, releasing objects may not return the memory to the operating system, so on
certain systems you may not see a change in memory usage. This seems to be the case for Plan 9
and Windows according to this thread on golang-nuts.
To eventually (force) collect unused memory you must call runtime.GC().
variable = nil may make things unreachable and thus eligible for collection, but it does not per se free anything.
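To illustrate, here is a sketch of the test program adapted to force collection and hand memory back to the OS immediately, watching the effect with runtime.ReadMemStats (this assumes a Go toolchain newer than the question's Go 1.0.3; debug.FreeOSMemory, which runs a GC and then returns as much memory to the operating system as possible, appeared later):

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func printHeap(tag string) {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    // HeapAlloc: live heap bytes; HeapReleased: bytes returned to the OS
    fmt.Printf("%s: HeapAlloc=%d MB, HeapReleased=%d MB\n",
        tag, m.HeapAlloc>>20, m.HeapReleased>>20)
}

func main() {
    printHeap("start")
    tmp := make([]uint32, 100000000) // ~400 MB, as in the question
    tmp[0] = 1
    printHeap("allocated")
    tmp = nil            // make the slice unreachable...
    runtime.GC()         // ...force a collection right away...
    debug.FreeOSMemory() // ...and return unused spans to the OS
    printHeap("freed")
}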