golang slice allocation performance

I stumbled upon an interesting thing while checking the performance of memory allocation in Go.
package main

import (
    "fmt"
    "time"
)

func main() {
    const alloc int = 65536
    now := time.Now()
    loop := 50000
    for i := 0; i < loop; i++ {
        sl := make([]byte, alloc)
        i += len(sl) * 0
    }
    elapsed := time.Since(now)
    fmt.Printf("took %s to allocate %d bytes %d times", elapsed, alloc, loop)
}
I am running this on a Core i7-2600 with Go 1.6 64-bit (same results with 32-bit) and 16GB of RAM, on Windows 10.
When alloc is 65536 (exactly 64KB) it runs for 30 seconds (!!!!).
When alloc is 65535 it takes ~200ms.
Can someone explain this to me please?
I tried the same code at home on my Core i7-920 @ 3.8GHz but it didn't show the same results (both took around 200ms). Does anyone have an idea what's going on?

Setting GOGC=off improved performance (down to less than 100ms). Why?
Because of escape analysis. When you build with go build -gcflags -m, the compiler prints which allocations escape to the heap. It depends on your machine and Go compiler version, but when the compiler decides that an allocation has to move to the heap it means two things:
1. the allocation will take longer (since "allocating" on the stack is just one CPU instruction);
2. the GC will have to clean up that memory later, costing more CPU time.
For my machine, the allocation of 65536 bytes escapes to the heap and 65535 doesn't.
That's why one byte changed the whole process from 200ms to 30s. Amazing..
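For reference, a minimal benchmark sketch (file and function names are mine, not from the question) that reproduces the comparison; running go test -bench=. -gcflags=-m on it also prints the compiler's escape decisions for the two make calls:

package main

import "testing"

var sink byte // keeps the allocations observable so they are not optimized away

func Benchmark65535(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sl := make([]byte, 65535) // small enough to stay on the stack
        sink = sl[0]
    }
}

func Benchmark65536(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sl := make([]byte, 65536) // escapes to the heap on the asker's Go version
        sink = sl[0]
    }
}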

Note/Update 2021: as Tapir Liu notes in Go 101 in this tweet:
As of Go 1.17, the Go runtime will allocate the elements of slice x on the stack if the compiler proves they are only used in the current goroutine and N <= 64KB:
var x = make([]byte, N)
And the Go runtime will allocate the array y on the stack if the compiler proves it is only used in the current goroutine and N <= 10MB:
var y [N]byte
So how do you allocate (the elements of) a slice whose size is larger than 64KB but not larger than 10MB on the stack (when the slice is only used in one goroutine)?
Just use the following:
var y [N]byte
var x = y[:]
Since stack allocation is faster than heap allocation, that has a direct effect on your test for alloc equal to 65536 and above.
Tapir adds:
In fact, we could allocate slices with an arbitrary total element size on the stack.
const N = 500 * 1024 * 1024 // 500M
var v byte = 123

func createSlice() byte {
    var s = []byte{N: 0} // composite literal with index N: a slice of length N+1
    for i := range s {
        s[i] = v
    }
    return s[v]
}
Changing 500 to 512 makes the program crash.
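Putting the two snippets above together, here is a self-contained sketch (function name and sizes are mine) of a buffer larger than 64KB that can still live on the stack when it never escapes; go build -gcflags=-m can confirm the decision on your toolchain:

package main

import "fmt"

func sumScratch() int {
    var backing [1 << 20]byte // 1MB: above the 64KB make() limit, below the 10MB array limit
    scratch := backing[:]     // slice header over the (potentially stack-allocated) array
    for i := range scratch {
        scratch[i] = byte(i)
    }
    total := 0
    for _, b := range scratch {
        total += int(b)
    }
    return total
}

func main() {
    fmt.Println(sumScratch())
}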

The reason is simple and visible in the compiler's assembly listing (producible with go build -gcflags=-S): compare the frame size of main for the two values of alloc.
const alloc int = 65535
0x0000 00000 (example.go:8) TEXT "".main(SB), ABIInternal, $65784-0
const alloc int = 65536
0x0000 00000 (example.go:8) TEXT "".main(SB), ABIInternal, $248-0
The difference is where the slice's backing array is created: with 65535 it lives in main's 65784-byte stack frame, while with 65536 the frame shrinks to 248 bytes because the array is allocated on the heap.

Related

Understanding the CL_DEVICE_MAX_WORK_GROUP_SIZE limit in OpenCL

I have a little difficulty understanding the max work group limit reported by OpenCL and how it affects my program.
My program reports the following:
CL_DEVICE_MAX_WORK_ITEM_SIZES : 1024, 1024, 1024
CL_DEVICE_MAX_WORK_GROUP_SIZE : 256
CL_DEVICE_MAX_WORK_ITEM_DIMENSIONS : 3
Now I am writing a program to add vectors with 1 million entries.
The calculation of globalSize and localSize for the NDRange is as follows:
int localSize = 64;
// Number of total work items - localSize must be a divisor
globalSize = ceil(n / (float)localSize) * localSize;
.......
// Execute the kernel over the entire range of the data set
err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, &localSize,
                             0, NULL, NULL);
Here as per my understanding OpenCL indirectly calculates the number of work groups it will launch.
For above example
globalSize = 15625 * 64 -> 1,000,000 -> So this is total number of threads that will be launched
localSize = 64 -> So each work group will have 64 work items
Hence from above we get
Total Work Groups Launched = globalSize/ localSize -> 15625 Work Groups
Here my confusion starts,
If you look at the value reported by OpenCL, CL_DEVICE_MAX_WORK_GROUP_SIZE: 256, I was thinking this means my device can launch at most 256 work groups in one dimension,
but the above calculation shows that I am launching 15625 work groups.
So how is this working?
I hope someone can clarify my confusion.
I am sure I am misunderstanding something.
Thanks in advance.
According to the specification of clEnqueueNDRangeKernel (https://www.khronos.org/registry/OpenCL/sdk/2.2/docs/man/html/clEnqueueNDRangeKernel.html),
CL_DEVICE_MAX_WORK_ITEM_SIZES and CL_DEVICE_MAX_WORK_GROUP_SIZE limit the local work size, not the number of work groups (CL_KERNEL_WORK_GROUP_SIZE is CL_DEVICE_MAX_WORK_GROUP_SIZE in OpenCL 1.2):
const int dimension = n;
const int localSizeDim[n] = { ... };  // each element must be <= CL_DEVICE_MAX_WORK_ITEM_SIZES[i]
const int localSize = localSizeDim[0] * localSizeDim[1] * ... * localSizeDim[n-1]; // the product must be <= CL_DEVICE_MAX_WORK_GROUP_SIZE
I couldn't find a device limit on the number of global work items, but the maximum value representable by size_t is given as the limit in the description of the error CL_INVALID_GLOBAL_WORK_SIZE.

Disable array/slice bounds checking in Golang to improve performance

I'm writing a NES/Famicom emulator. I register a callback function that will be called every time a pixel is rendered, which means it will be called roughly 3.7 million times per second (256 width * 240 height * 60 fps).
In my callback function there are many array/slice operations, and I found that Go does bounds checking every time I index an element. But the indexes are results of bitwise AND operations, so I can tell that they will never exceed the bounds.
So, is there a way to disable bounds checking?
Thank you.
Using gcflags you can disable bounds checking.
go build -gcflags=-B .
If you really need to avoid the bounds check, you can use the unsafe package and use C-style pointer arithmetic to perform your lookups:
index := 2
size := unsafe.Sizeof(YourStruct{})                          // element size in bytes
p := unsafe.Pointer(&yourStructSlice[0])                      // pointer to the first element
indexp := (unsafe.Pointer)(uintptr(p) + size*uintptr(index))  // address of element 'index'
yourStructPtr := (*YourStruct)(indexp)
https://play.golang.org/p/GDNphKsJPOv
You should time it to determine how much CPU time you are actually saving by doing this, but it is probably true that this approach can be made faster.
Also, you may want to look at the actual generated instructions to make sure that the output really is more efficient; doing lookups without bounds checks may well be more trouble than it's worth. Some info on how to do that here: https://github.com/teh-cmc/go-internals/blob/master/chapter1_assembly_primer/README.md
Another common approach is to write performance critical code in assembly (see https://golang.org/doc/asm). Ain't no automatic bounds checking in asm :)
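A lighter-weight alternative, not taken from the answers above but a common Go idiom, is to give the compiler enough information to eliminate the checks itself (bounds-check elimination). A sketch with hypothetical buffer sizes and names:

package main

import "fmt"

// renderPixel writes into a 64K framebuffer. After the reslice to a constant
// length, the compiler can usually prove the masked index is in range and
// emit no bounds check for the store.
func renderPixel(px []byte, i int) {
    px = px[:0x10000] // one check here; proves len(px) == 65536 to the compiler
    px[i&0xFFFF] = 42 // index masked to 16 bits, so it is provably < 65536
}

func main() {
    buf := make([]byte, 0x10000)
    renderPixel(buf, 123456)
    fmt.Println(buf[123456&0xFFFF])
}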
The XY Problem
The XY problem is asking about your attempted solution rather than
your actual problem.
Your real problem is overall performance. Let's see some benchmarks to show whether bounds checking is actually a significant problem. It may not be: in the benchmark below, the difference works out to less than one millisecond per second of emulated output.
Bounds check:
BenchmarkPixels-4 300 4034580 ns/op
No bounds check:
BenchmarkPixels-4 500 3150985 ns/op
bounds_test.go:
package main

import (
    "testing"
)

const (
    width  = 256
    height = 240
    frames = 60
)

var pixels [width * height]byte

func writePixel(w, h int) {
    pixels[w*height+h] = 42
}

func BenchmarkPixels(b *testing.B) {
    for N := 0; N < b.N; N++ {
        for f := 0; f < frames; f++ {
            for w := 0; w < width; w++ {
                for h := 0; h < height; h++ {
                    writePixel(w, h)
                }
            }
        }
    }
}
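For comparison, a hypothetical drop-in addition to bounds_test.go (my variant, not the answerer's) that exploits the question's point about masked indexes: with a power-of-two array and an AND mask, the compiler can prove the index is always in range and omit the bounds check without -gcflags=-B.

const pixelsLen = 1 << 16 // 65536, the next power of two above width*height (61440)

var maskedPixels [pixelsLen]byte

func writePixelMasked(w, h int) {
    // (w*height + h) & (pixelsLen - 1) is always in [0, pixelsLen), so no check is emitted.
    maskedPixels[(w*height+h)&(pixelsLen-1)] = 42
}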

golang CPU usage

I am aware of [1]. With a few lines of code, I just want to extract the current CPU usage of the n processes consuming the most CPU, more or less the top 5 rows of top. Using github.com/shirou/gopsutil/process this is straightforward:
// file: gotop.go
package main

import (
    "log"
    "sort"
    "time"

    "github.com/shirou/gopsutil/process"
)

type ProcInfo struct {
    Name  string
    Usage float64
}

type ByUsage []ProcInfo

func (a ByUsage) Len() int      { return len(a) }
func (a ByUsage) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a ByUsage) Less(i, j int) bool {
    return a[i].Usage > a[j].Usage
}

func main() {
    for {
        processes, _ := process.Processes()

        var procinfos []ProcInfo
        for _, p := range processes {
            a, _ := p.CPUPercent()
            n, _ := p.Name()
            procinfos = append(procinfos, ProcInfo{n, a})
        }
        sort.Sort(ByUsage(procinfos))

        for _, p := range procinfos[:5] {
            log.Printf(" %s -> %f", p.Name, p.Usage)
        }
        time.Sleep(3 * time.Second)
    }
}
While the refresh rate of this gotop implementation is 3 seconds, just like top, gotop needs roughly five times as much CPU as top to obtain these values. Is there any trick to read the 5 most CPU-consuming processes more efficiently? I also tried to find the implementation of top to see how it is done there.
Is psutil responsible for this slow-down? I also found cpustat, implemented in Go, but even sudo ./cpustat -i 3000 -s 1 does not seem to be as efficient as top.
The main motivation is to monitor the usage of the current machine with a fairly small amount of computational effort, so that it can run as a service in the background.
It seems even htop only reads /proc/stat.
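For the stated goal (whole-machine usage with little overhead), here is a minimal sketch of that /proc/stat approach; field positions follow proc(5), and the 3-second interval mirrors the code above:

package main

import (
    "fmt"
    "os"
    "strconv"
    "strings"
    "time"
)

// readCPUSample returns (busy, total) jiffies from the aggregate "cpu" line of /proc/stat.
func readCPUSample() (busy, total uint64, err error) {
    data, err := os.ReadFile("/proc/stat")
    if err != nil {
        return 0, 0, err
    }
    fields := strings.Fields(strings.SplitN(string(data), "\n", 2)[0])[1:]
    var idle uint64
    for i, f := range fields {
        v, err := strconv.ParseUint(f, 10, 64)
        if err != nil {
            return 0, 0, err
        }
        total += v
        if i == 3 || i == 4 { // idle and iowait columns
            idle += v
        }
    }
    return total - idle, total, nil
}

func main() {
    b1, t1, err := readCPUSample()
    if err != nil {
        panic(err)
    }
    time.Sleep(3 * time.Second)
    b2, t2, err := readCPUSample()
    if err != nil {
        panic(err)
    }
    fmt.Printf("overall CPU usage: %.1f%%\n", 100*float64(b2-b1)/float64(t2-t1))
}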
Edit
As proposed in the comments, here is the result of profiling:
Showing top 10 nodes out of 46 (cum >= 70ms)
flat flat% sum% cum cum%
40ms 40.00% 40.00% 40ms 40.00% syscall.Syscall
10ms 10.00% 50.00% 30ms 30.00% github.com/shirou/gopsutil/process.(*Process).fillFromStatusWithContext
10ms 10.00% 60.00% 30ms 30.00% io/ioutil.ReadFile
10ms 10.00% 70.00% 10ms 10.00% runtime.slicebytetostring
10ms 10.00% 80.00% 20ms 20.00% strings.FieldsFunc
10ms 10.00% 90.00% 10ms 10.00% syscall.Syscall6
10ms 10.00% 100% 10ms 10.00% unicode.IsSpace
0 0% 100% 10ms 10.00% bytes.(*Buffer).ReadFrom
0 0% 100% 70ms 70.00% github.com/shirou/gopsutil/process.(*Process).CPUPercent
0 0% 100% 70ms 70.00% github.com/shirou/gopsutil/process.(*Process).CPUPercentWithContext
It seems like the syscalls take forever. A tree dump is here:
https://gist.github.com/PatWie/4fa528b7d7b1d0b5c1b665c056671477
This changes the question into:
- Is the syscall the issue?
- Are there any C sources for the top program? I only found the implementation of htop.
- Is there an easy fix? I am considering writing it in C and just wrapping it for Go.
github.com/shirou/gopsutil/process uses ioutil.ReadFile, which accesses the filesystem less efficiently than top does. In particular, ReadFile:
- calls Stat, which adds an extra, unnecessary syscall;
- uses os.Open instead of unix.Openat + os.NewFile, which costs extra kernel time traversing /proc when resolving the path. os.NewFile is also slightly inefficient because it always checks whether the file descriptor is non-blocking; this can be avoided by using the golang.org/x/sys/unix or syscall packages directly.
Retrieving process details under Linux is fairly inefficient in general (lots of filesystem scanning and text parsing), but you can achieve performance similar to top in Go by fixing the filesystem access as described above.
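As a hedged illustration of that access pattern (not gopsutil's actual code; names are mine), reading /proc/<pid>/stat with unix.Openat relative to an already-open /proc avoids the extra Stat and the repeated path resolution:

package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

// readProcStat reads /proc/<pid>/stat relative to an already-open /proc directory fd.
func readProcStat(procDirFD int, pid int) ([]byte, error) {
    fd, err := unix.Openat(procDirFD, fmt.Sprintf("%d/stat", pid), unix.O_RDONLY, 0)
    if err != nil {
        return nil, err
    }
    defer unix.Close(fd)

    buf := make([]byte, 4096) // /proc stat files are small; one read is enough
    n, err := unix.Read(fd, buf)
    if err != nil {
        return nil, err
    }
    return buf[:n], nil
}

func main() {
    procFD, err := unix.Open("/proc", unix.O_RDONLY|unix.O_DIRECTORY, 0)
    if err != nil {
        panic(err)
    }
    defer unix.Close(procFD)

    data, err := readProcStat(procFD, 1) // pid 1 as an example
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s", data)
}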

CUDA profiler reports inefficient global memory access

I have a simple CUDA kernel which I thought was accessing global memory efficiently. The Nvidia profiler however reports that I am performing inefficient global memory accesses. My kernel code is:
__global__ void update_particles_kernel
(
    float4 *pos,
    float4 *vel,
    float4 *acc,
    float dt,
    int numParticles
)
{
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    int offset = 0;
    while (index + offset < numParticles)
    {
        vel[index + offset].x += dt*acc[index + offset].x; // line 247
        vel[index + offset].y += dt*acc[index + offset].y;
        vel[index + offset].z += dt*acc[index + offset].z;

        pos[index + offset].x += dt*vel[index + offset].x; // line 251
        pos[index + offset].y += dt*vel[index + offset].y;
        pos[index + offset].z += dt*vel[index + offset].z;

        offset += blockDim.x * gridDim.x;
    }
}
In particular, the profiler flags the global loads and stores on these lines as inefficient (profiler screenshot not reproduced here; it reports 16 L2 transactions per access against an ideal of 8 for line 247 and 4 for the remaining lines).
From the CUDA best practices guide it says:
"For devices of compute capability 2.x, the requirements can be summarized quite easily: the concurrent accesses of the threads of a warp will coalesce into a number of transactions equal to the number of cache lines necessary to service all of the threads of the warp. By default, all accesses are cached through L1, which as 128-byte lines. For scattered access patterns, to reduce overfetch, it can sometimes be useful to cache only in L2, which caches shorter 32-byte segments (see the CUDA C Programming Guide).
For devices of compute capability 3.x, accesses to global memory are cached only in L2; L1 is reserved for local memory accesses. Some devices of compute capability 3.5, 3.7, or 5.2 allow opt-in caching of globals in L1 as well."
Now, in my kernel, based on this information I would expect that 16 transactions would be required to service a 32-thread warp, because float4 is 16 bytes and on my card (770M, compute capability 3.0) reads from the L2 cache are performed in 32-byte chunks (16 bytes * 32 threads / 32-byte cache lines = 16 transactions). Indeed, the profiler reports that I am doing 16 accesses. What I don't understand is why the profiler reports that the ideal access would involve only 8 L2 transactions per access for line 247 and only 4 L2 transactions per access for the remaining lines. Can someone explain what I am missing here?
I have a simple CUDA kernel which I thought was accessing global memory efficiently. The Nvidia profiler however reports that I am performing inefficient global memory accesses.
To take one example, your float4 vel array is stored in memory like this:
0.x  0.y  0.z  0.w  1.x  1.y  1.z  1.w  2.x  2.y  2.z  2.w  3.x  3.y  3.z  3.w ...
^                   ^                   ^                   ^
thread0             thread1             thread2             thread3
So when you do this:
vel[index + offset].x += ...; // line 247
you are accessing (storing) at the locations (.x) that I have marked above. The gaps in between each ^ mark indicate an inefficient access pattern, which the profiler is pointing out. (It does not matter that in the very next line of code, you are storing to the .y locations.)
There are at least 2 solutions, one of which would be a classical AoS -> SoA reorganization of your data, with appropriate code adjustments. This is well documented (e.g. here on the cuda tag and elsewhere) in terms of what it means, and how to do it, so I will let you look that up.
The other typical solution is to load a float4 quantity per thread, when you need it, and store a float4 quantity per thread, when you need to. Your code can be trivially reworked to do this, which should give improved profiling results:
// preceding code need not change
while (index + offset < numParticles)
{
    float4 my_vel = vel[index + offset];
    float4 my_acc = acc[index + offset];

    my_vel.x += dt*my_acc.x;
    my_vel.y += dt*my_acc.y;
    my_vel.z += dt*my_acc.z;
    vel[index + offset] = my_vel;

    float4 my_pos = pos[index + offset];
    my_pos.x += dt*my_vel.x;
    my_pos.y += dt*my_vel.y;
    my_pos.z += dt*my_vel.z;
    pos[index + offset] = my_pos;

    offset += blockDim.x * gridDim.x;
}
Even though you might think that this code is "less efficient" than your code, because your code "appears" to be only loading and storing .x, .y, .z, whereas mine "appears" to also load and store .w, in fact there is essentially no difference, due to the way a GPU loads and stores to/from global memory. Although your code does not appear to touch .w, in the process of accessing the adjacent elements, the GPU will load the .w elements from global memory, and also (eventually) store the .w elements back to global memory.
What I don't understand is why the profiler reports that the ideal access would involve 8 L2 transactions per access for line 247
For line 247 in your original code, you are accessing one float quantity per thread for the load operation of acc.x, and one float quantity per thread for the load operation of vel.x. A float quantity per thread by itself should require 128 bytes for a warp, which is 4 32-byte L2 cachelines. Two loads together would require 8 L2 cacheline loads. This is the ideal case, which assumes that the quantities are packed together nicely (SoA). But that is not what you have (you have AoS).

Program execution taking almost same usertime on CPU as well as GPU?

The program for finding prime numbers using OpenCL 1.1 gave the following benchmarks:
Device: CPU
Real time: approx. 3 sec
User time: approx. 32 sec
Device: GPU
Real time: approx. 37 sec
User time: approx. 32 sec
Why is the user time of execution on the GPU not less than that on the CPU? Is data/task parallelization not occurring?
System specifications: 64-bit CentOS 5.3 with two ATI Radeon 5970 graphics cards + an Intel Core i7 processor (12 cores)
Your kernel is rather inefficient; I have an adjusted one below for you to consider. As to why it runs better on a CPU device:
Using your algorithm, the work items take varying amounts of time to execute, and they take longer as the numbers tested grow larger. A work group on a GPU will not finish until all of its items are finished, so some of the hardware sits idle until the last item is done. On a CPU it behaves more like a loop iterating over the kernel items, so the difference in cycles needed to compute each item won't drastically affect performance.
'A' is not used by the kernel. It should not be copied unless it is used. It looks like you wanted to test A[i] rather than 'i' itself, though.
I think the GPU would be much better at FFT-based prime calculations, or even a sieve algorithm.
{
    int t;
    int i = get_global_id(0);
    int end = sqrt((float)i); // sqrt needs a floating-point argument in OpenCL C

    if (i % 2) {
        B[i] = 0;
    } else {
        B[i] = 1; // assuming only that it should be non-zero
    }
    for (t = 3; (t <= end) && (B[i] > 0); t += 2) {
        if (i % t == 0) {
            B[i] = 0;
        }
    }
}
