RenderScript GPU performance not on par with device GFLOPS?

As a test, I am trying to crunch as much GFLOPS from the GPU as possible, just to see how far we can go with compute via RenderScript.
For testing purposes I use a GPU-cache-friendly kernel that will (hopefully) not be bound by memory access:
#pragma rs_fp_relaxed
rs_allocation input;

float __attribute__((kernel)) compute(float in, int x)
{
    float sum = 0;
    if (x < 64) return 0;
    for (int i = 0; i < 64; i++) {
        sum += rsGetElementAt_float(input, x - i);
    }
    return sum;
}
On the Java side I just call the kernel a couple of times:
for (int i = 0; i < 1024; i++) {
    m_script.forEach_compute(m_inAllocation, m_outAllocation);
}
With allocation sizes of 1M floats this maxes out at around 1-2 GFLOPS on a GPU that should reach around 100 GFLOPS (Snapdragon 600, APQ8064AB); that is 50x-100x less compute performance!
I have tried unrolling the loop (10% difference), using larger or smaller sums (<5% diff), different allocation sizes (<5% diff), and 1D or 2D allocations (no diff), but I come nowhere near the GFLOPS that should be possible on the device. I am even starting to think that the entire kernel only runs on the CPUs.
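For reference, a back-of-the-envelope sketch (my own, with a placeholder wall time and counting one add per inner-loop iteration) of how such a GFLOPS figure is derived from the measurement:
#include <cstdio>

int main()
{
    const double elements    = 1.0e6;   // 1M floats per allocation
    const double flop_per_el = 64.0;    // one add per inner-loop iteration
    const double launches    = 1024.0;
    const double seconds     = 40.0;    // placeholder wall time, not a measured value
    std::printf("%.2f GFLOPS\n", elements * flop_per_el * launches / seconds / 1.0e9);
    return 0;
}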
In a similar sense, looking at the results of a RenderScript benchmark application (https://compubench.com/result.jsp?benchmark=compu20), top-of-the-line devices only achieve around 60M pixels/s on a Gaussian blur. A 5x5 blur in a naive (non-separable) implementation takes around 50 FLOPS/pixel, resulting in 3 GFLOPS as opposed to the 300 GFLOPS these GPUs offer.
Any thoughts?
(see e.g. http://kyokojap.myweb.hinet.net/gpu_gflops/ for an overview of device capabilities)
EDIT:
Using the OpenCL libs that are available on the device (Samsung S4, Android 4.4.2) I have rewritten the RenderScript test program in OpenCL and run it via the NDK. With basically the same setup (1M float buffers, running the kernel 1024 times) I now get around 25 GFLOPS, i.e. 10x the RenderScript performance and about 4x below the theoretical device maximum.
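For reference, a sketch of what such an OpenCL port of the kernel above could look like (assumed names, not the author's exact code), carried as a raw string inside the C++ host program built with the NDK:
static const char* kKernelSource = R"CLC(
__kernel void compute(__global const float* input, __global float* output)
{
    const int x = get_global_id(0);
    if (x < 64) { output[x] = 0.0f; return; }
    float sum = 0.0f;
    for (int i = 0; i < 64; i++)
        sum += input[x - i];
    output[x] = sum;
}
)CLC";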
For RenderScript there is no way of knowing if a kernel is running on the GPU. So:
if the RenderScript kernel does run on the GPU, why is it so slow?
if the kernel is not running on the GPU, which devices do run RenderScript on the GPU (aside from most probably the Nexus line)?
Thanks.

What device are you using? Not all devices are shipping with GPU drivers yet.
Also, that kernel will be memory bound, since you've got a 1:1 arithmetic to load ratio.
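To illustrate that point, a hypothetical variant (not the kernel above) that reuses each loaded value for many multiply-adds, which would make the loop compute bound rather than load bound:
float compute_bound(const float* input, int x)
{
    float sum = 0.0f;
    const float scale = 1.0001f;
    const float v = input[x];       // a single load...
    for (int i = 0; i < 256; i++)
        sum = sum * scale + v;      // ...feeds 512 FLOPs (one mul and one add per iteration)
    return sum;
}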

Related

Analysing performance of transpose function

I've written a naive and an "optimized" transpose function for order-3 tensors containing double-precision complex numbers, and I would like to analyze their performance.
Approximate code for naive transpose function:
#pragma omp for schedule(static)
for (auto i2 = std::size_t(0); i2 < n2; ++i2)
{
    for (auto i1 = std::size_t{}; i1 < n1; ++i1)
    {
        for (auto i3 = std::size_t{}; i3 < n3; ++i3)
        {
            tens_tr(i3, i2, i1) = tens(i1, i2, i3);
        }
    }
}
Approximate code for optimized transpose function (remainder loop not shown, assume divisibility):
#pragma omp for schedule(static)
for (auto i2 = std::size_t(0); i2 < n2; ++i2)
{
    // blocked loop
    for (auto bi1 = std::size_t{}; bi1 < n1; bi1 += block_size)
    {
        for (auto bi3 = std::size_t{}; bi3 < n3; bi3 += block_size)
        {
            for (auto i1 = std::size_t{}; i1 < block_size; ++i1)
            {
                for (auto i3 = std::size_t{}; i3 < block_size; ++i3)
                {
                    cache_buffer[i3 * block_size + i1] = tens(bi1 + i1, i2, bi3 + i3);
                }
            }
            for (auto i1 = std::size_t{}; i1 < block_size; ++i1)
            {
                for (auto i3 = std::size_t{}; i3 < block_size; ++i3)
                {
                    tens_tr(bi3 + i1, i2, bi1 + i3) = cache_buffer[i1 * block_size + i3];
                }
            }
        }
    }
}
Assumption: I decided to use a streaming function as reference because I reasoned that the transpose function, in its perfect implementation, would closely resemble any bandwidth-saturating streaming function.
For this purpose, I chose the DAXPY loop as reference.
#pragma omp parallel for schedule(static)
for (auto i1 = std::size_t{}; i1 < tens_a_->get_n1(); ++i1)
{
    auto* slice_a = reinterpret_cast<double*>(tens_a_->get_slice_data(i1));
    auto* slice_b = reinterpret_cast<double*>(tens_b_->get_slice_data(i1));
    const auto slice_size = 2 * tens_a_->get_slice_size(); // 2 doubles for a complex
    #pragma omp simd safelen(8)
    for (auto index = std::size_t{}; index < slice_size; ++index)
    {
        slice_b[index] += lambda_ * slice_a[index]; // fp_count: 2, traffic: 2+1
    }
}
Also, I used a simple copy kernel as a second reference.
#pragma omp parallel for schedule(static)
for (auto i1 = std::size_t{}; i1 < tens_a_->get_n1(); ++i1)
{
    const auto* op1_begin = reinterpret_cast<double*>(tens_a_->get_slice_data(i1));
    const auto* op1_end = op1_begin + 2 * tens_a_->get_slice_size(); // 2 doubles in a complex
    auto* op2_iter = reinterpret_cast<double*>(tens_b_->get_slice_data(i1));
    #pragma omp simd safelen(8)
    for (auto* iter = op1_begin; iter != op1_end; ++iter, ++op2_iter)
    {
        *op2_iter = *iter;
    }
}
Hardware:
Intel(R) Xeon(R) Platinum 8168 (Skylake) with 24 cores @ 2.70 GHz and L1, L2 and L3 caches sized 32 kB, 1 MB and 33 MB respectively.
Memory of 48 GiB @ 2666 MHz. Intel Advisor's roof-line view says the memory BW is 115 GB/s.
Benchmarking: 20 warm-up runs, 100 timed experiments, each with newly allocated data "touched" such that page-faults will not be measured.
Compiler and flags:
Intel compiler from OneAPI 2022.1.0, optimization flags -O3;-ffast-math;-march=native;-qopt-zmm-usage=high.
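A minimal sketch of that measurement protocol (my own naming, not the author's harness): touch the data after allocation so page faults are not timed, then run 20 warm-ups before the 100 timed repetitions:
#include <algorithm>
#include <chrono>
#include <vector>

// "Kernel" stands for any of the benchmarked loops (DAXPY, copy, transpose).
template <typename Kernel>
double average_seconds(Kernel&& kernel, std::vector<double>& data)
{
    std::fill(data.begin(), data.end(), 0.0);   // first touch: page faults happen here, not in the timing
    for (int i = 0; i < 20; ++i) kernel();      // warm-up runs
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 100; ++i) kernel();     // timed experiments
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count() / 100.0;
}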
Results (sizes assumed to be adequately large):
Using 24 threads pinned on 24 cores (total size of both tensors ~10 GiB):
DAXPY 102 GB/s
Copy 101 GB/s
naive transpose 91 GB/s
optimized transpose 93 GB/s
Using 1 thread pinned on a single core (total size of both tensors ~10 GiB):
DAXPY 20 GB/s
Copy 20 GB/s
naive transpose 9.3 GB/s
optimized transpose 9.3 GB/s
Questions:
Why is my naive transpose function performing so well?
Why is the difference in performance between reference and transpose functions so high when using only 1 thread?
I'm glad to receive any kind of input on any of the above questions. Also, I will gladly provide additional information when required. Unfortunately, I cannot provide a minimal reproducer because of the size and complexity of each benchmark program. Thank you very much for your time and help in advance!
Updates:
Could it be that the Intel compiler performed loop-blocking on the naive transpose function as an optimization?
Is the above-mentioned assumption valid? [asked before the edit]
Not really.
Transpositions of large arrays tend not to saturate the RAM bandwidth on some platforms. This can be due to cache effects like cache thrashing. For more information about this, you can read this post for example. In your specific case, things work quite well though (see below).
On NUMA platforms, the distribution of data pages across the NUMA nodes can have a strong impact on performance. This can be due to a (temporarily) unbalanced page distribution, non-uniform latency, non-uniform throughput, or even the (temporary) saturation of the RAM of some NUMA node. NUMA effects can be seen on recent AMD processors but also on some Intel ones (e.g. since Skylake, see this post), depending on the system configuration.
Even assuming the above points do not apply in your case, reasoning from the perfect case while the naive code may not behave like a perfect transposition can result in wrong interpretations. If this assumption is broken, the results could, for example, overestimate the performance of the naive implementation.
Why is my naive transpose function performing so well?
A good throughput does not mean the computation is fast. The computation can be slower with a higher throughput if more data needs to be transferred from the RAM. This is possible due to cache misses. More specifically, with a naive access pattern, cache lines can be replaced more frequently with lower reuse (due to cache thrashing) and thus the wall-clock time should be higher. You need to measure the wall-clock time; metrics are good for understanding what is going on, but not for measuring the performance of a kernel.
In this specific case, the chosen size (i.e. 1050) should not cause too many conflict misses because it is not divisible by a large power of two. In the naive version, the tens_tr writes will fill many cache lines partially (1050) before they can be reused when i1 is increased (up to 8 subsequent increments are needed to fill a cache line). This means 1050 * 64 ~= 66 KiB of cache is needed for the i1-i3-based transposition of one given i2 to complete. The cache lines cannot be reused for other i2 values, so the cache does not need to be very large for the transposition to be relatively efficient. That being said, one should also consider the tens reads (though they can be evicted from the cache quite quickly). In the end, the 16-way associative L2 cache of 1 MiB should be enough for that. Note that the naive implementation should perform poorly with significantly bigger arrays, since the L2 cache would no longer be large enough for cache lines to be fully reused (causing data to be reloaded many times from the memory hierarchy, typically from the L3 in sequential runs and from the RAM in parallel runs). Also note that the naive transposition can perform very poorly on processors with smaller caches (e.g. x86-64 desktop processors, except recent ones that often have bigger caches) or if you plan to change the size of the input array to something divisible by a large power of two.
While blocking enables a better use of the L1 cache, it is not so important in your specific case. Indeed, the naive computation does not benefit from the L1 cache, but the effect is small since the transposition should be bound by the L3 cache and the RAM anyway. That being said, better L1 cache usage could help to reduce the latency a bit, depending on the target processor architecture. You should see the effect mainly on significantly smaller arrays.
In parallel, the L3 cache is large enough for the 24 cores to run in parallel without too many conflict misses. Even if the L3 performed poorly, the kernel would be mainly memory bound, so the impact of the cache misses would not be very visible.
Why is the difference in performance between reference and transpose functions so high when using only 1 thread?
This is likely due to the latency of memory operations. Transpositions perform memory reads/writes with huge strides, and the hardware prefetchers may not be able to fully mitigate the large latency of the L3 cache or of the main RAM. Indeed, the number of pending cache-line requests per core is limited (to about a dozen on Skylake), so the kernel is bound by the latency of those requests since there is not enough concurrency to fully overlap it.
For DAXPY/copy, the hardware prefetchers can reduce the latency better, but the amount of concurrency is still too small compared to the latency on a Xeon processor to fully saturate the RAM with 1 thread. This is quite a reasonable architectural limitation, since such processors are designed to run applications that scale well across many cores.
With many threads, the per-core limitation vanishes and it is replaced by a stronger one: the practical RAM bandwidth.
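As a back-of-the-envelope illustration of this latency bound (the latency below is an assumed round figure, not a measurement):
#include <cstdio>

int main()
{
    // per-core bandwidth ~ pending cache lines * line size / memory latency
    const double pending_lines = 12.0;    // ~a dozen outstanding requests per core
    const double line_bytes    = 64.0;
    const double latency_s     = 80e-9;   // assumed effective memory latency
    std::printf("%.1f GB/s per core\n", pending_lines * line_bytes / latency_s / 1e9);
    return 0;
}
With these assumed numbers the bound lands around 10 GB/s per core, the same ballpark as the measured single-threaded transposition, while prefetching explains why the streaming kernels do better.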
Could it be that the Intel compiler performed loop-blocking on the naive transpose function as an optimization?
This is theoretically possible since the Intel compiler (ICC) has such an optimizer, but it is very unlikely for ICC to do that on a 3D transposition code (since it is a fairly complex, relatively specific use-case). The best way to be sure is to analyse the assembly code.
Note on the efficiency of the optimized transposition
Due to cache-line write allocation on x86-64 processors (like your Xeon), I would expect the transposition to have a lower throughput if the measurement did not take this effect into account. Indeed, the processor needs to read the tens_tr cache lines in order to fill them, since it does not know ahead of time whether they will be completely overwritten (assuming so would be crazy for the naive transposition) and they may be evicted before that happens (e.g. during a context switch, by another running program).
There are several possible explanations:
The assumption is wrong, and it means 1/3 of the bandwidth is wasted reading cache lines that are only meant to be written;
the DAXPY code has the same issue and the reported maximum bandwidth is not really correct either (unlikely);
ICC managed to rewrite the transposition to use the caches efficiently and also generated non-temporal store instructions to avoid this effect (unlikely).
Based on these possible reasons, I think the measured throughput already takes write allocation into account and that the transposition implementation can be optimized further. Indeed, the optimized version doing the copy can use non-temporal stores to write the array back to memory without reading it first. This is not possible with the naive implementation. With such an optimization, the throughput may be the same, but the execution time can be about 33% lower (due to a better use of the memory bandwidth). This is a good example showing that the initial assumption is simply wrong.
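A minimal sketch (assumed names, AVX2 intrinsics for brevity) of the kind of non-temporal write-back meant here, streaming a contiguous run of doubles to the destination without first reading its cache lines:
#include <immintrin.h>
#include <cstddef>

// Assumes dst is 32-byte aligned and count is a multiple of 4;
// a real implementation needs head/tail handling.
void stream_out(double* dst, const double* src, std::size_t count)
{
    for (std::size_t i = 0; i < count; i += 4)
    {
        __m256d v = _mm256_loadu_pd(src + i);   // source may stay in cache
        _mm256_stream_pd(dst + i, v);           // NT store: no read-for-ownership of dst
    }
    _mm_sfence();   // make the streaming stores globally visible before the data is reused
}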

GPGPU threading strategy

I want to improve the performance of a compute shader.
Each thread group of the shader needs 8 blocks of data, each block has 24 elements.
I’m primarily optimizing for the GeForce 1080 Ti in my development PC and the Tesla V100 in the production servers, but other people also run this code on their workstations; GPUs vary and are not necessarily NVIDIA.
Which way is better:
[numthreads( 24, 1, 1 )], write a loop for( uint i = 0; i < 8; i++ )
This wastes 25% of execution units in each warp, but the memory access pattern is awesome. The VRAM reads of these 24 active threads are either coalesced, or full broadcasts.
[numthreads( 96, 1, 1 )], write a loop for( uint i = groupThreadID / 24; i < 8; i += 4 )
Looks better in terms of execution units utilization, however VRAM access pattern becomes worse because each warp is reading 2 slices of the input data.
Also I’m worried about synchronization penalty of GroupMemoryBarrierWithGroupSync() intrinsic, the group shared memory becomes split over 3 warps.
Also a bit harder to implement.

Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?

We've got a simple memory throughput benchmark. All it does is memcpy repeatedly for a large block of memory.
Looking at the results (compiled for 64-bit) on a few different machines, Skylake machines do significantly better than Broadwell-E, keeping OS (Win10-64), processor speed, and RAM speed (DDR4-2133) the same. We're not talking a few percentage points, but rather a factor of about 2. Skylake is configured dual-channel, and the results for Broadwell-E don't vary for dual/triple/quad-channel.
Any ideas why this might be happening? The code that follows is compiled in Release in VS2015, and reports average time to complete each memcpy at:
64-bit: 2.2ms for Skylake vs 4.5ms for Broadwell-E
32-bit: 2.2ms for Skylake vs 3.5ms for Broadwell-E.
We can get greater memory throughput on a quad-channel Broadwell-E build by utilizing multiple threads, and that's nice, but to see such a drastic difference for single-threaded memory access is frustrating. Any thoughts on why the difference is so pronounced?
We've also used various benchmarking software, and they validate what this simple example shows - single-threaded memory throughput is way better on Skylake.
#include <memory>
#include <Windows.h>
#include <iostream>

// Prevent the memcpy from being optimized out of the for loop
_declspec(noinline) void MemoryCopy(void *destinationMemoryBlock, void *sourceMemoryBlock, size_t size)
{
    memcpy(destinationMemoryBlock, sourceMemoryBlock, size);
}

int main()
{
    const int SIZE_OF_BLOCKS = 25000000;
    const int NUMBER_ITERATIONS = 100;
    void* sourceMemoryBlock = malloc(SIZE_OF_BLOCKS);
    void* destinationMemoryBlock = malloc(SIZE_OF_BLOCKS);
    LARGE_INTEGER Frequency;
    QueryPerformanceFrequency(&Frequency);
    while (true)
    {
        LONGLONG total = 0;
        LONGLONG max = 0;
        LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
        for (int i = 0; i < NUMBER_ITERATIONS; ++i)
        {
            QueryPerformanceCounter(&StartingTime);
            MemoryCopy(destinationMemoryBlock, sourceMemoryBlock, SIZE_OF_BLOCKS);
            QueryPerformanceCounter(&EndingTime);
            ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
            ElapsedMicroseconds.QuadPart *= 1000000;
            ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
            total += ElapsedMicroseconds.QuadPart;
            max = max(ElapsedMicroseconds.QuadPart, max);
        }
        std::cout << "Average is " << total*1.0 / NUMBER_ITERATIONS / 1000.0 << "ms" << std::endl;
        std::cout << "Max is " << max / 1000.0 << "ms" << std::endl;
    }
    getchar();
}
Single-threaded memory bandwidth on modern CPUs is limited by max_concurrency / latency of the transfers from L1D to the rest of the system, not by DRAM-controller bottlenecks. Each core has 10 Line-Fill Buffers (LFBs) which track outstanding requests to/from L1D. (And 16 "superqueue" entries which track lines to/from L2).
(Update: experiments show that Skylake probably has 12 LFBs, up from 10 in Broadwell. See e.g. Fig. 7 in the ZombieLoad paper, and other performance experiments including @BeeOnRope's testing of multiple store streams.)
Intel's many-core chips have higher latency to L3 / memory than quad-core or dual-core desktop / laptop chips, so single-threaded memory bandwidth is actually much worse on a big Xeon, even though the max aggregate bandwidth with many threads is much better. They have many more hops on the ring bus that connects cores, memory controllers, and the System Agent (PCIe and so on).
SKX (Skylake-server / AVX512, including the i9 "high-end desktop" chips) is really bad for this: L3 / memory latency is significantly higher than for Broadwell-E / Broadwell-EP, so single-threaded bandwidth is even worse than on a Broadwell with a similar core count. (SKX uses a mesh instead of a ring bus because that scales better, see this for details on both. But apparently the constant factors are bad in the new design; maybe future generations will have better L3 bandwidth/latency for small / medium core counts. The private per-core L2 is bumped up to 1MiB though, so maybe L3 is intentionally slow to save power.)
(Skylake-client (SKL) like in the question, and later quad/hex-core desktop/laptop chips like Kaby Lake and Coffee Lake, still use the simpler ring-bus layout. Only the server chips changed. We don't yet know for sure what Ice Lake client will do.)
A quad or dual core chip only needs a couple threads (especially if the cores + uncore (L3) are clocked high) to saturate its memory bandwidth, and a Skylake with fast DDR4 dual channel has quite a lot of bandwidth.
For more about this, see the Latency-bound Platforms section of this answer about x86 memory bandwidth. (And read the other parts for memcpy/memset with SIMD loops vs. rep movs/rep stos, and NT stores vs. regular RFO stores, and more.)
Also related: What Every Programmer Should Know About Memory? (2017 update on what's still true and what's changed in that excellent article from 2007).
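As a hedged illustration of why more threads help here (a sketch, not a tuned implementation): splitting one large copy across cores multiplies the number of line-fill buffers in flight, so aggregate bandwidth grows until the memory controllers saturate:
#include <cstddef>
#include <cstring>
#include <thread>
#include <vector>

void parallel_copy(char* dst, const char* src, std::size_t size, unsigned threads)
{
    std::vector<std::thread> workers;
    const std::size_t chunk = size / threads;
    for (unsigned t = 0; t < threads; ++t)
    {
        const std::size_t begin = t * chunk;
        const std::size_t len = (t == threads - 1) ? size - begin : chunk;
        // each worker copies its own contiguous chunk
        workers.emplace_back([=] { std::memcpy(dst + begin, src + begin, len); });
    }
    for (auto& w : workers) w.join();
}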
I finally got VTune (evaluation) up and running. It gives a DRAM bound score of .602 (between 0 and 1) on Broadwell-E and .324 on Skylake, with a huge part of the Broadwell-E delay coming from memory latency. Given that the memory sticks are the same speed (except dual-channel configured in Skylake and quad-channel in Broadwell-E), my best guess is that something about the memory controller in Skylake is just tremendously better.
It makes buying into the Broadwell-E architecture a much tougher call, and requires that you really need the extra cores to even consider it.
I also got L3/TLB miss counts. On Broadwell-E, TLB miss count was about 20% higher, and L3 miss count about 36% higher.
I don't think this is really an answer for "why" so I won't mark it as such, but is as close as I think I'll get to one for the time being. Thanks for all the helpful comments along the way.

Slow sorting using Thrust, CUDA

I am a newbie to CUDA. I simply tried to sort an array using Thrust.
clock_t start_time = clock();
thrust::host_vector<int> h_vec(10);
thrust::generate(h_vec.begin(), h_vec.end(), rand);
thrust::device_vector<int> d_vec = h_vec;
thrust::sort(d_vec.begin(), d_vec.end());
//thrust::sort(h_vec.begin(), h_vec.end());
clock_t stop_time = clock();
printf("%f\n", (double)(stop_time - start_time) / CLOCKS_PER_SEC);
The time taken to sort d_vec is 7.4 s, and the time taken to sort h_vec is 0.4 s.
I am assuming it's parallel computation on device memory, so shouldn't it be faster?
Probably the main problem is context creation time: the first CUDA call will initialize the CUDA context which takes some time, see here. Therefore you should start measuring time only after the first CUDA call.
In general you can only expect speed-up with GPU code compared to CPU code if the degree of parallelism is high enough. The vector size of 10 as in the example code is definitely too small to achieve speed-up. With a vector size >> 10000 you can expect to fully utilize a modern GPU.
You should also think about measuring only the time for sorting without the copy d_vec = h_vec, since often you will work with the device vector in the next step. Then you can consider the copy operation as a one time setup cost. (However if sorting is the only operation on device it is of course reasonable to include the memcopy in the measurement.)
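Putting those points together, a minimal sketch of a fairer measurement (sizes and structure are illustrative): force context creation first, exclude the host-to-device copy, and synchronize around the sort:
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main()
{
    const int n = 1 << 24;                      // large enough to keep the GPU busy
    thrust::host_vector<int> h_vec(n);
    thrust::generate(h_vec.begin(), h_vec.end(), rand);

    thrust::device_vector<int> warmup(1);       // triggers CUDA context creation up front
    thrust::device_vector<int> d_vec = h_vec;   // copy excluded from the timing below

    cudaDeviceSynchronize();
    clock_t start_time = clock();
    thrust::sort(d_vec.begin(), d_vec.end());
    cudaDeviceSynchronize();                    // wait for the sort to finish
    clock_t stop_time = clock();

    printf("%f\n", (double)(stop_time - start_time) / CLOCKS_PER_SEC);
    return 0;
}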

Why is this simple OpenCL kernel running so slowly?

I'm looking into OpenCL, and I'm a little confused why this kernel is running so slowly, compared to how I would expect it to run. Here's the kernel:
__kernel void copy(
    const __global char* pSrc,
    __global __write_only char* pDst,
    int length)
{
    const int tid = get_global_id(0);
    if(tid < length) {
        pDst[tid] = pSrc[tid];
    }
}
I've created the buffers in the following way:
char* out = new char[2048*2048];
cl::Buffer(
    context,
    CL_MEM_USE_HOST_PTR | CL_MEM_WRITE_ONLY,
    length,
    out);
Ditto for the input buffer, except that I've initialized the in pointer to random values. Finally, I run the kernel this way:
cl::Event event;
queue.enqueueNDRangeKernel(
    kernel,
    cl::NullRange,
    cl::NDRange(length),
    cl::NDRange(1),
    NULL,
    &event);
event.wait();
On average, the time is around 75 milliseconds, as calculated by:
cl_ulong startTime = event.getProfilingInfo<CL_PROFILING_COMMAND_START>();
cl_ulong endTime = event.getProfilingInfo<CL_PROFILING_COMMAND_END>();
std::cout << (endTime - startTime) * SECONDS_PER_NANO / SECONDS_PER_MILLI << "\n";
I'm running Windows 7, with an Intel i5-3450 chip (Ivy Bridge architecture). For comparison, the "direct" way of doing the copy takes less than 5 milliseconds. I don't think the event.getProfilingInfo includes the communication time between the host and device. Thoughts?
EDIT:
At the suggestion of ananthonline, I changed the kernel to use float4s instead of chars, and that dropped the average run time to about 50 millis. Still not as fast as I would have hoped, but an improvement. Thanks ananthonline!
I think your main problem is the 2048*2048 work groups you are using. The OpenCL drivers on your system have to manage a lot more overhead when you have this many single-item work groups. This would be especially bad if you were to execute this program on a GPU, because you would get a very low level of saturation of the hardware.
Optimization: call your kernel with larger work groups. You don't even have to change your existing kernel. What should this size be? I have used 64 below as an example; 64 happens to be a decent number on most hardware.
size_t myOptimalGroupSize = 64;
cl::Event event;
queue.enqueueNDRangeKernel(
    kernel,
    cl::NullRange,
    cl::NDRange(length),
    cl::NDRange(myOptimalGroupSize),
    NULL,
    &event);
event.wait();
You should also get your kernel to do more than copy a single value. I have given an answer to a similar question about global memory over here.
CPUs are very different from GPUs. Running this on an x86 CPU, the best way to achieve decent performance would be to use double16 (the largest data type) instead of char or float4 (as suggested by someone else).
In my limited experience with OpenCL on CPUs, I have never reached the performance levels I could get with an OpenMP parallelization.
The best way to do a copy in parallel on a CPU would be to divide the block to copy into a small number of large sub-blocks, and let each thread copy one sub-block.
The GPU approach is orthogonal: each thread participates in the copy of the same block.
This is because on GPUs, different threads can access contiguous memory regions efficiently (coalescing).
To do an efficient copy on a CPU with OpenCL, use a loop inside your kernel to copy contiguous data, and use a work-group size no larger than the number of available cores.
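A sketch of what such a CPU-oriented kernel could look like (hypothetical names, shown as the string the host code would compile): each work-item copies one contiguous chunk in a loop:
static const char* kChunkedCopySource = R"CLC(
__kernel void copy_chunked(__global const char* pSrc,
                           __global char* pDst,
                           int length,
                           int chunk)
{
    const int begin = get_global_id(0) * chunk;
    const int end = min(begin + chunk, length);
    for (int i = begin; i < end; i++)
        pDst[i] = pSrc[i];
}
)CLC";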
I believe it is the cl::NDRange(1) that is telling the runtime to use single-item work groups. This is not efficient. In the C API you can pass NULL for this to leave the work-group size up to the runtime; there should be a way to do that in the C++ API as well (perhaps also just NULL). This should be faster on the CPU; it certainly will be on a GPU.
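In the C++ bindings that can be expressed by passing cl::NullRange as the local size (a sketch reusing the variables from the question), which leaves the work-group size to the runtime just like NULL does in the C API:
cl::Event event;
queue.enqueueNDRangeKernel(
    kernel,
    cl::NullRange,          // offset
    cl::NDRange(length),    // global work size
    cl::NullRange,          // local work size: chosen by the runtime
    NULL,
    &event);
event.wait();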
