Julia package load extremely slow on first run - performance

I'm using Julia 1.5.2 under Linux 5.4.0 and waited around 15 minutes for Pkg.add("DifferentialEquations"). Then I started the kernel in Jupyter Notebook and ran the following code. It took a painful 1 minute to execute (the very first time I did this it took 225 s).
t = time()
using Printf
using BenchmarkTools
using OrdinaryDiffEq
using Plots
tt = time() - t
#sprintf("It took %f seconds to import Printf, BenchmarkTools, OrdinaryDiffEq and Plots.", tt)
# It took 58.545894 seconds to import Printf, BenchmarkTools, OrdinaryDiffEq and Plots.
Finally, I did the same as above, but for each package individually. This is the summary (times in seconds):
Printf: 0.004755973815917969
BenchmarkTools: 0.06729602813720703
Plots: 19.99405598640442
OrdinaryDiffEq: 19.001102209091187
I know from here that Pkg was slow in the past, but I don't think 15 minutes is a normal installation time at all. However, this is not my main problem.
I know that Julia needs to compile everything every time the kernel is started or a package is loaded. But this is obviously not just compilation time, it's a compilation eternity.
Can anyone figure out why this is so terribly slow? And, if it's normal, wouldn't it be better to provide precompiled packages through Pkg, the way numpy and friends are shipped for Python? Or at least compile once, on the first using, and keep the result?
Thank you!
My complete Platform Info:
Julia Version 1.5.2
Commit 539f3ce943 (2020-09-23 23:17 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core(TM) i3-6100U CPU @ 2.30GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-9.0.1 (ORCJIT, skylake)

This problem is generally called latency or time-to-first-plot (TTFP) when referring to julia-lang. Searching with these keywords will turn up several discussions.
A nice recent analysis of this problem is given in the article "Analyzing sources of compiler latency in Julia: method invalidations".
At the time of writing (end of 2020, stable release v1.5.3), no general solution is available, but strategies of massive ahead-of-time precompilation of packages instead of JIT are being discussed, with marginal success.

Related

Linux perf and MKL

I've been trying to profile our app (amd64 RHEL 7.6, built with GCC 5.3, using MKL + OMP). I used perf record, but all I see is a small number of samples in the OMP library; nothing in main() or below. This is with one 10-minute run and also with another that only lasts a second or so.
Is MKL + OMP doing some non-standard threading that perf can't follow?
I'll try running the test and then separately running perf record -p.
Does anyone have experience with perf record and MKL? Maybe VTune will work better!
It seems that the problem was with -f(no-)omit-frame-pointer. I was building with -O3 -g3, and for some reason perf record failed to get the stacks. I thought that -g3 would inhibit -fomit-frame-pointer. Presumably MKL still has the frame pointers, so perf could get its stack traces.

What can be done to lower UE4Editor startup time?

Status: the problem has lessened, but compared to other users' reports it persists.
I have moved to UE4.27.0 and the startup time dropped from 11 minutes (v4.26.2) to 6 minutes (the RAM usage dropped too!). But that doesn't compare to the speed other people report: "almost instantly"...
It is not compiling anything, not even shaders; it is something like the 6th time I run it for this project.
Should I try disabling plugins? I'm new to UE and don't want to make my workflow harder. Though, for example, I have nothing VR-related to test, so that could really be disabled from the start.
HD READ SPEED? NO
I have tested moving the whole UE4Editor engine folder (100 GB) to a 3x SSD stripe, but the UE4Editor startup time remained the same. The HDD where it currently lives is fast too, just not as fast as the 3x SSD stripe.
CPU USAGE? MAYBE. Could using all 4 cores solve it?
UE4Editor startup uses A SINGLE CORE ONLY. I can confirm with htop and the system monitor: only a single core is at 100% at any given time, and the load hops between the 4 cores.
I tested the command-line parameter -USEALLAVAILABLECORES after the project path for UE4Editor, but nothing changed. I read that this option is ignored on some machines, so maybe if I patch its handling it could work on mine?
GPU? no?
A report about a (weak) integrated graphics card says it doesn't interfere with the startup time.
Log for UE4Editor v4.27.0 with the new biggest intervals ("..." means log lines omitted to make it easier to read; "!(interval in seconds)" just marks the gap between the adjacent lines, with no lines omitted there):
[2021.09.15-23.38.20:677][ 0]LogHAL: Linux SourceCodeAccessSettings: NullSourceCodeAccessor
!22s
[2021.09.15-23.38.42:780][ 0]LogTcpMessaging: Initializing TcpMessaging bridge
[2021.09.15-23.38.42:782][ 0]LogUdpMessaging: Initializing bridge on interface 0.0.0.0:0 to multicast group 230.0.0.1:6666.
!16s
[2021.09.15-23.38.58:158][ 0]LogPython: Using Python 3.7.7
...
[2021.09.15-23.39.01:817][ 0]LogImageWrapper: Warning: PNG Warning: Duplicate iCCP chunk
!75s
[2021.09.15-23.40.16:951][ 0]SourceControl: Source control is disabled
...
[2021.09.15-23.40.26:867][ 0]LogAndroidPermission: UAndroidPermissionCallbackProxy::GetInstance
!16s
[2021.09.15-23.40.42:325][ 0]LogAudioCaptureCore: Display: No Audio Capture implementations found. Audio input will be silent.
...
[2021.09.15-23.41.08:207][ 0]LogInit: Transaction tracking system initialized
!9s
[2021.09.15-23.41.17:513][ 0]BlueprintLog: New page: Editor Load
!23s
[2021.09.15-23.41.40:396][ 0]LocalizationService: Localization service is disabled
...
[2021.09.15-23.41.45:457][ 0]MemoryProfiler: OnSessionChanged
!13s
[2021.09.15-23.41.58:497][ 0]LogCook: Display: CookSettings for Memory: MemoryMaxUsedVirtual 0MiB, MemoryMaxUsedPhysical 16384MiB, MemoryMinFreeVirtual 0MiB, MemoryMinFreePhysical 1024MiB
SPECS:
I'm using Ubuntu 20.04.
My CPU has 4 cores at 3.6 GHz.
GeForce GT 710 1GB.
Related question but for older UE4: https://answers.unrealengine.com/questions/987852/view.html
Unreal Engine needs a high-end PC with a lot of RAM, fast SSDs, a good CPU, and a mid-range graphics card. First of all, there are always some shaders that need to be compiled by the engine, and a lot of assets to load at startup. Since you're on Linux, you are probably using a self-compiled Unreal Engine build... not the best thing to do for a newbie, because this may cause several problems with load time, startup, compiling, and a lot of other things. If it's your first time using Unreal, try using it on Windows; it's all easier there.

OpenCL clCreateContextFromType function results in memory leaks

I ran valgrind on one of my open-source OpenCL codes (https://github.com/fangq/mmc), and it detected a lot of memory leaks in the OpenCL host code. Most of them pointed back to the line where I create the context object using clCreateContextFromType.
I double checked all my OpenCL variables, command queues, kernels and programs, and made sure that they are all properly released, but still, when testing with sample programs, every call to the mmc_run_cl() function bumps memory up by 300-400 MB that is not released on return.
You can reproduce the valgrind report by running the commands below in a terminal:
git clone https://github.com/fangq/mmc.git
cd mmc/src
make clean
make all
cd ../examples/validation
valgrind --show-leak-kinds=all --leak-check=full ../../src/bin/mmc -f cube2.inp -G 1 -s cube2 -n 1e4 -b 0 -D TP -M G -F bin
assuming your system has gcc/git/libOpenCL and valgrind installed. Change the -G 1 input to a different number if you want to run it on another OpenCL device (add -L to list them).
In the table below, I list the repeat count of each leak valgrind detected on an NVIDIA GPU (Titan V) in a Linux box (Ubuntu 16.04) with the latest driver + CUDA 9.
Again, most leaks are associated with the clCreateContextFromType line, which I assume means some GPU memory is not being released, but I did release all GPU resources at the end of the host code.
Do you notice anything that I missed in my host code? Your input is much appreciated.
counts | error message
------------------------------------------------------------------------------------
380 ==27828== by 0x402C77: main (mmc.c:67)
Code: entry point to the below errors
64 ==27828== by 0x41CF02: mcx_list_gpu (mmc_cl_utils.c:135)
Code: OCL_ASSERT((clGetPlatformIDs(0, NULL, &numPlatforms)));
4 ==27828== by 0x41D032: mcx_list_gpu (mmc_cl_utils.c:154)
Code: context=clCreateContextFromType(cps,devtype[j],NULL,NULL,&status);
58 ==27828== by 0x41DF8A: mmc_run_cl (mmc_cl_host.c:111)
Code: entry point to the below errors
438 ==27828== by 0x41E006: mmc_run_cl (mmc_cl_host.c:124)
Code: OCL_ASSERT(((mcxcontext=clCreateContextFromType(cprops,CL_DEVICE_TYPE_ALL,...));
13 ==27828== by 0x41E238: mmc_run_cl (mmc_cl_host.c:144)
Code: OCL_ASSERT(((mcxqueue[i]=clCreateCommandQueue(mcxcontext,devices[i],prop,&status),status)));
1 ==27828== by 0x41E7A6: mmc_run_cl (mmc_cl_host.c:224)
Code: OCL_ASSERT(((gprogress[0]=clCreateBufferNV(mcxcontext,CL_MEM_READ_WRITE, NV_PIN, ...);
1 ==27828== by 0x41E7F9: mmc_run_cl (mmc_cl_host.c:225)
Code: progress = (cl_uint *)clEnqueueMapBuffer(mcxqueue[0], gprogress[0], CL_TRUE, ...);
10 ==27828== by 0x41EDFA: mmc_run_cl (mmc_cl_host.c:290)
Code: status=clBuildProgram(mcxprogram, 0, NULL, opt, NULL, NULL);
7 ==27828== by 0x41F95C: mmc_run_cl (mmc_cl_host.c:417)
Code: OCL_ASSERT((clEnqueueReadBuffer(mcxqueue[devid],greporter[devid],CL_TRUE,0,...));
Update [04/11/2020]:
Reading @doqtor's comment, I did the following test on 5 different devices: 2 NVIDIA GPUs, 2 AMD GPUs, and 1 Intel CPU. What he said was correct: the memory leak does not happen with the Intel OpenCL library, and I found that the AMD OpenCL driver is fine too. The only problem is that the NVIDIA OpenCL library seems to leak on both GPUs I tested (Titan V and RTX 2080).
My test results are below. Memory/CPU profiling was done with psrecord, introduced in this post.
I will open a new question with a bounty on how to reduce this memory leak with the NVIDIA OpenCL library. If you have any experience with this, please share; I will post the link below. Thanks.
I double checked all my OpenCL variables, command queues, kernels and programs, and made sure that they are all properly released...
Well, I still found one (tiny) memory leak in the mmc code:
==15320== 8 bytes in 1 blocks are definitely lost in loss record 14 of 1,905
==15320== at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==15320== by 0x128D48: mmc_run_cl (mmc_cl_host.c:137)
==15320== by 0x11E71E: main (mmc.c:67)
The memory allocated for greporter isn't freed. So that one is yours to fix.
The rest are potential memory leaks in the OpenCL library. They may or may not be real leaks; for example, the library may use custom memory allocators which valgrind does not recognize, or it may do other tricks. There are a lot of threads about that:
clGetPlatformIDs Memory Leak
https://software.intel.com/en-us/forums/opencl/topic/753786
https://github.com/KhronosGroup/OpenCL-ICD-Loader/issues/13
OpenCL clGetPlatformIDs gives around 230 valgrind memcheck errors
In general you can't do much about those unless you want to dive into the library code and do something about it.
I would suggest carefully suppressing the reported leaks that come from the library. The suppression file can be generated as described in the valgrind manual: https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
... but still, when testing on sample programs, every call to the mmc_run_cl() function bumps up memory by 300MB-400MB and won't release at return
How did you check that? I haven't seen memory growing suspiciously. I set -n 1000e4, which made it run for about 2 minutes, and the allocated memory stayed flat the whole time at ~0.6% of my RAM size. Note that I didn't use NVIDIA CUDA but POCL on an Intel GPU and CPU, linked with the libOpenCL installed from the ocl-icd-libopencl1:amd64 package on Ubuntu 18.04. So you may try to give that a go and check whether it changes anything.
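For reference, one way to watch a run's resident memory from the outside, similar in spirit to the psrecord profiling mentioned in the question's update (a rough sketch using psutil, reusing the example command from earlier):

import subprocess, time
import psutil

# Launch the example mmc run and poll its resident set size once per second.
proc = subprocess.Popen(["../../src/bin/mmc", "-f", "cube2.inp", "-G", "1",
                         "-s", "cube2", "-n", "1e4", "-b", "0",
                         "-D", "TP", "-M", "G", "-F", "bin"])
p = psutil.Process(proc.pid)
while proc.poll() is None:
    try:
        print("RSS MiB: %.1f" % (p.memory_info().rss / 2**20))
    except psutil.NoSuchProcess:
        break
    time.sleep(1.0)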
======== Update ================================
I re-ran it as you described in the comment: after the first iteration the memory usage was 0.6%, after the 2nd iteration it increased to 0.9%, and the iterations after that didn't increase memory usage further. Valgrind also didn't report anything new beyond what I observed earlier. So I would suggest linking with a libOpenCL other than the nvidia-cuda one and retesting.

Unpredictable CUDNN_STATUS_NOT_INITIALIZED on Windows

I am running Keras neural network training and prediction on a GTX 1070 on Windows 10. Most of the time it works, but from time to time it complains:
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:359] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:366] error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows
E c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\stream_executor\cuda\cuda_dnn.cc:326] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
F c:\tf_jenkins\home\workspace\release-win\device\gpu\os\windows\tensorflow\core\kernels\conv_ops.cc:659] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
It can be explained neither by the literal meaning of the error nor by an OOM error.
How can I fix it?
Try limiting your GPU memory usage by setting the gpu option per_process_gpu_memory_fraction.
Fiddle around with it to see what works and what doesn't.
I recommend using .7 as a starting baseline.
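A minimal sketch of that setting, assuming the TF1 Session and standalone Keras backend APIs used elsewhere in this thread:

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

# Cap TensorFlow at ~70% of the GPU's memory as a starting baseline.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7
set_session(tf.Session(config=config))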
I hit this problem occasionally on Windows 10 with Keras.
A reboot solves the problem for a short while, but it happens again.
Following https://github.com/fchollet/keras/issues/1538:
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))
These settings solve the halt problem.
Got the solution for this problem.
I had the same problem on Windows 10 with an Nvidia GeForce 920M.
Make sure you have the correct version of the cuDNN library. If the version is not compatible with your CUDA version, it won't throw an error during TensorFlow installation, but it will interfere with memory allocation on the GPU.
DO check your CUDA and cuDNN versions. Also follow the instructions about session creation mentioned above.
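If you are unsure what versions your TensorFlow build expects, here is a quick check from Python (tf.sysconfig.get_build_info() only exists on fairly recent TensorFlow releases, so treat this as a sketch):

import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
# On recent GPU builds this reports the CUDA/cuDNN versions TF was built against.
info = tf.sysconfig.get_build_info()
print("CUDA:", info.get("cuda_version"), "cuDNN:", info.get("cudnn_version"))

Compare those against the CUDA toolkit and cuDNN actually installed on your machine.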
Finally the issue is now resolved for me; I spent many hours struggling with this.
I recommend following all the installation steps properly, as described in these links:
TensorFlow-
https://www.tensorflow.org/install/install_windows
and for CuDNN -
https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows
For me this wasn't enough. I then updated my GeForce Game Ready Driver from the GeForce Experience window, and after a restart it started working for me.
The driver can also be downloaded from https://www.geforce.com/drivers
Similar to what other people are saying, enabling memory growth for your GPUs can resolve this issue.
The following works for me when added to the beginning of the training script:
# Using Tensorflow-2.4.x
import tensorflow as tf
try:
    tf_gpus = tf.config.list_physical_devices('GPU')
    for gpu in tf_gpus:
        tf.config.experimental.set_memory_growth(gpu, True)
except:
    pass
The TF docs section "Allowing GPU memory growth" helped me a lot:
The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process. Note that we do not release memory, since that can lead to even worse memory fragmentation. To turn this option on, set the option in the ConfigProto by:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
or
with tf.Session(graph=graph_node, config=config) as sess:
    ...
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. For example, you can tell TensorFlow to only allocate 40% of the total memory of each GPU by:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)

How can I get the LLVM IR dump from XLA in TensorFlow?

I am trying to get the LLVM IR generated by the XLA compiler in TensorFlow. I know that the entire LLVM context is contained in the llvm_module object, which is then converted to a string with the utility function llvm_ir::DumpModuleToString(*llvm_module) inside the Compile() function in the file //tensorflow/compiler/xla/service/cpu/cpu_compiler.cc.
I have been trying to log it using VLOG(2) from tensorflow/core/logging.h, but no logs are shown. However, the remaining VLOG(2) statements from other files are logged in my Python run.
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
2017-03-10 22:36:43.226843: I tensorflow/compiler/xla/service/platform_util.cc:58] platform Host present with 8 visible devices
2017-03-10 22:36:43.227931: I tensorflow/compiler/xla/service/service.cc:183] XLA service 0x2821510 executing computations on platform Host. Devices:
2017-03-10 22:36:43.227951: I tensorflow/compiler/xla/service/service.cc:191] StreamExecutor device (0): <undefined>, <undefined>
b'Hello, TensorFlow!'
[FYI I can't leave comments, since I just joined and apparently don't have a reputation yet.]
First off, make sure to read this, including the starred blue boxes. In particular note that turning on XLA for your whole session only performs JIT for GPU, and not CPU at the moment.
https://www.tensorflow.org/performance/xla/jit
Now let's assume you've got everything set up correctly. The program in your example won't be compiled by XLA, for two reasons:
1. As @mrry has noted, XLA doesn't handle strings.
2. Even if you replaced the string with a number, you still wouldn't see any IR dump, because it's just a single constant, and XLA will have constant-folded it away.
In the comments you mentioned running on mnist_softmax, presumably following the instructions on the link above. If you're indeed compiling and running on CPU, the only remaining issue is using VLOG(2). VLOG is only enabled if you set command-line flags to turn it on.
So try replacing your VLOG(2) with LOG(INFO), and you should see the IR dump in your logs.
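Alternatively, rather than editing the source, the C++ VLOG threshold can usually be raised at runtime through the TF_CPP_MIN_VLOG_LEVEL environment variable. This is only a sketch, since the exact behaviour depends on your TensorFlow build, and the variable must be set before TensorFlow's shared library is loaded:

import os

# Raise the C++ VLOG threshold so VLOG(2) statements (such as the IR dump
# in cpu_compiler.cc) are emitted. Must be set before importing TensorFlow.
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "2"

import tensorflow as tf
# ... then build and run the XLA-compiled model (e.g. mnist_softmax) as before.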

Resources