I use boost::object_pool in my program, but I have run into a problem: the program cannot exit.
The code is below. Please don't suggest using boost::pool instead; boost::pool is not the problem, I only want to discuss boost::object_pool. Could anybody help me?
#include <iostream>
#include <boost/pool/object_pool.hpp>
int main(void) {
    boost::object_pool<int> p;
    int count = 1000*1000;
    int** pv = new int*[count];
    for (int i = 0; i < count; i++)
        pv[i] = p.construct();
    for (int i = 0; i < count; i++)
        p.destroy(pv[i]);
    delete [] pv;
    return 0;
}
This program cannot exit normally. Why?
On my machine, this program works correctly, if very slowly.
The loop calling "destroy" is very slow; it appears to have an O(N^2) component in it: for each 10x increase in the size of the loops, the run time increases by a factor of roughly 90.
Here are some timings:
1000 elements 0.021 sec
10000 elements 1.219 sec
100000 elements 103.29 secs (1m43.29s)
1000000 elements 13437 secs (223m57s)
Someone beat me to it - I just saw this question via the Boost mailing list.
According to the docs, destroy is O(N), so calling it N times certainly isn't ideal for large N. However, I imagine that relying on the object_pool destructor itself (which calls the destructor for each allocated object, and is itself O(N)) would help a lot with bulk deletions.
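As a minimal sketch of that suggestion (adapted from the code in the question; I haven't timed it myself), you would skip the per-element destroy() loop and let the pool's destructor reclaim everything in one pass:

#include <boost/pool/object_pool.hpp>

int main() {
    const int count = 1000*1000;
    {
        boost::object_pool<int> p;
        for (int i = 0; i < count; i++)
            p.construct();
        // No per-element p.destroy(): when p goes out of scope, ~object_pool()
        // runs the destructor of every still-allocated object and releases the
        // underlying memory blocks in bulk.
    }
    return 0;
}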
I did have a graph showing timings on my machine - but since I haven't used Stack Overflow much I can't post it - ah well it doesn't show that much ...
I've published a fix, derived from a merge sort, in the Boost sandbox:
https://github.com/graehl/boost/tree/object_pool-constant-time-free
or, standalone:
https://github.com/graehl/Pool-object_pool
I'm a student learning CUDA, and I would like to optimize the execution time of my kernel function. To that end, I wrote a short program that computes the difference between two pictures, and I compared the execution time of a classic CPU implementation in C with a GPU implementation in CUDA C.
Here you can find the code I'm talking about:
int *imgresult_data = (int *) malloc(width*height*sizeof(int));
int size = width*height;

switch(computing_type)
{
case GPU:
    HANDLE_ERROR(cudaMalloc((void**)&dev_data1, size*sizeof(unsigned char)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_data2, size*sizeof(unsigned char)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_data_res, size*sizeof(int)));

    HANDLE_ERROR(cudaMemcpy(dev_data1, img1_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
    HANDLE_ERROR(cudaMemcpy(dev_data2, img2_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
    HANDLE_ERROR(cudaMemcpy(dev_data_res, imgresult_data, size*sizeof(int), cudaMemcpyHostToDevice));

    float time;
    cudaEvent_t start, stop;
    HANDLE_ERROR( cudaEventCreate(&start) );
    HANDLE_ERROR( cudaEventCreate(&stop) );
    HANDLE_ERROR( cudaEventRecord(start, 0) );

    for(int m = 0; m < nb_loops ; m++)
    {
        diff<<<height, width>>>(dev_data1, dev_data2, dev_data_res);
    }

    HANDLE_ERROR( cudaEventRecord(stop, 0) );
    HANDLE_ERROR( cudaEventSynchronize(stop) );
    HANDLE_ERROR( cudaEventElapsedTime(&time, start, stop) );

    HANDLE_ERROR(cudaMemcpy(imgresult_data, dev_data_res, size*sizeof(int), cudaMemcpyDeviceToHost));

    printf("Time to generate: %4.4f ms \n", time/nb_loops);
    break;

case CPU:
    clock_t begin = clock(), diff;
    for (int z=0; z<nb_loops; z++)
    {
        // Apply the difference between 2 images
        for (int i = 0; i < height; i++)
        {
            tmp = i*imgresult_pitch;
            for (int j = 0; j < width; j++)
            {
                imgresult_data[j + tmp] = (int) img2_data[j + tmp] - (int) img1_data[j + tmp];
            }
        }
    }
    diff = clock() - begin;
    float msec = diff*1000/CLOCKS_PER_SEC;
    msec = msec/nb_loops;
    printf("Time taken %4.4f milliseconds", msec);
    break;
}
And here is my kernel function:
__global__ void diff(unsigned char *data1, unsigned char *data2, int *data_res)
{
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row*blockDim.x;

    if (row < MAX_H && col < MAX_W)
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}
I obtained these execution times for each one:
CPU: 1.3210 ms
GPU: 0.3229 ms
I wonder why the GPU result is not as low as it should be. I am a beginner in CUDA, so please bear with me if I am making some classic mistakes.
EDIT1:
Thank you for your feedback. I tried removing the 'if' condition from the kernel, but it didn't change my program's execution time much.
However, after installing the CUDA profiler, it told me that my threads weren't running concurrently. I don't understand why I get this kind of message, but it does seem plausible, because my application is only 5 or 6 times faster with the GPU than with the CPU. This ratio should be greater, because each thread is supposed to process one pixel concurrently with all the others. If you have an idea of what I am doing wrong, it would be helpful...
Flow.
Here are a few things you could do which may improve the performance of your diff kernel:
1. Let each thread do more work
In your kernel, each thread handles just a single element; but having a thread do anything at all already carries a bunch of overhead, at the block and the thread level, including obtaining the parameters, checking the condition and doing address arithmetic. Now, you could say "Oh, but the reads and writes take much more time than that; this overhead is negligible" - but you would be ignoring the fact that the latency of these reads and writes is hidden by the presence of many other warps which may be scheduled to do their work.
So, let each thread process more than a single element. Say, 4, as each thread can easily read 4 bytes at once into a register. Or even 8 or 16; experiment with it. Of course you'll need to adjust your grid and block parameters accordingly.
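As an illustration, here is a hedged sketch of a grid-stride variant of the kernel in which each thread walks over several elements; the n parameter and the launch geometry in the comment are illustrative values, not taken from the question:

__global__ void diff_multi(const unsigned char *data1, const unsigned char *data2,
                           int *data_res, int n)
{
    int stride = blockDim.x * gridDim.x;
    // each thread strides through the data, handling several pixels
    for (int v = threadIdx.x + blockIdx.x * blockDim.x; v < n; v += stride)
        data_res[v] = (int) data2[v] - (int) data1[v];
}

// Launch with fewer threads than elements so each thread loops ~4 times, e.g.
// diff_multi<<<(size/4 + 255)/256, 256>>>(dev_data1, dev_data2, dev_data_res, size);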
2. "Restrict" your pointers
__restrict__ is not part of standard C++, but it is supported in CUDA. It tells the compiler that accesses through the different pointers passed to the function never overlap. See:
What does the restrict keyword mean in C++?
Realistic usage of the C99 'restrict' keyword?
Using it allows the CUDA compiler to apply additional optimizations, e.g. loading or storing data via non-coherent cache. Indeed, this happens with your kernel although I haven't measured the effects.
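A minimal sketch of what this looks like on the kernel's signature (the body is unchanged from the question):

__global__ void diff(const unsigned char * __restrict__ data1,
                     const unsigned char * __restrict__ data2,
                     int * __restrict__ data_res)
{
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row*blockDim.x;
    if (row < MAX_H && col < MAX_W)
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}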
3. Consider using a "SIMD" instruction
CUDA offers this intrinsic:
__device__ unsigned int __vsubss4(unsigned int a, unsigned int b)
which subtracts each signed byte value in b from its corresponding byte in a, with signed saturation. If you can "live" with that result, rather than expecting a larger int value, it could save you some work - and it goes very well with increasing the number of elements per thread. In fact, it might let you increase it even further to get to the optimum.
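A hedged sketch of how that could look, assuming the image size is a multiple of 4 and assuming you are willing to keep the result as packed bytes (interpreted as signed values) rather than ints - both assumptions are mine, not from the question:

__global__ void diff_packed(const unsigned int * __restrict__ data1,
                            const unsigned int * __restrict__ data2,
                            unsigned int * __restrict__ data_res,
                            int nWords)              // nWords = size in bytes / 4
{
    int v = threadIdx.x + blockIdx.x * blockDim.x;
    if (v < nWords)
        data_res[v] = __vsubss4(data2[v], data1[v]); // 4 saturating signed-byte subtractions at once
}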
I don't think you are measuring times correctly; memory copies are a time-consuming step on the GPU that you should take into account when measuring your time.
I see some details that you can test:
I suppose you are using MAX_H and MAX_W as constants; you may consider doing so using cudaMemcpyToSymbol().
Remember to sync your threads using __syncthreads(), so you don't get issues between each loop iteration.
CUDA works with warps, so the number of blocks and the number of threads per block work better as multiples of 8, but not larger than 512 threads per block unless your hardware supports it. Here is an example using 128 threads per block: <<<(cols*rows+127)/128, 128>>>.
Remember as well to free the memory you allocated on the GPU and to destroy the timing events you created.
In your kernel function you can use a single index variable: int v = threadIdx.x + blockIdx.x * blockDim.x.
Have you checked, besides the execution time, that your result is correct? I think you should use cudaMallocPitch() and cudaMemcpy2D() when working with 2D arrays, because of padding (see the sketch below).
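For that last point, a minimal sketch of a pitched allocation and copy; width, height and the host-side pitch are placeholders that would have to match your actual image layout:

size_t devPitch;
unsigned char *dev_img1;
HANDLE_ERROR(cudaMallocPitch((void**)&dev_img1, &devPitch,
                             width * sizeof(unsigned char), height));
HANDLE_ERROR(cudaMemcpy2D(dev_img1, devPitch,
                          img1_data, width * sizeof(unsigned char),   // host pitch (assumed tightly packed)
                          width * sizeof(unsigned char), height,
                          cudaMemcpyHostToDevice));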
Probably there are other issues with the code, but here's what I see. The following lines in __global__ void diff are not optimal:
if (row < MAX_H && col < MAX_W)
{
    data_res[v] = (int) data2[v] - (int) data1[v];
}
Conditional statements inside a kernel result in warp divergence: the if and else parts within a warp are executed in sequence, not in parallel. Also, as you might have realized, the if evaluates to false only at the borders. To avoid the divergence and the needless computation, split your image in two parts:
Central part where row < MAX_H && col < MAX_W is always true. Create an additional kernel for this area. if is unnecessary here.
Border areas that will use your diff kernel.
Obviously you'll have to modify the code that calls the kernels; a sketch of the interior kernel follows.
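Here is a hedged sketch of what the interior kernel could look like, assuming a 2D launch whose grid covers only pixels that are guaranteed to be in range (the explicit width parameter and the 2D indexing are my additions; the border strips would still go through the original diff kernel):

__global__ void diff_interior(const unsigned char *data1, const unsigned char *data2,
                              int *data_res, int width)
{
    // The grid is assumed to be sized so that every (row, col) lands inside
    // the image, so no bounds check (and hence no divergence) is needed.
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int v = col + row * width;
    data_res[v] = (int) data2[v] - (int) data1[v];
}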
And on a separate note:
The GPU has a throughput-oriented architecture, not a latency-oriented one like the CPU. That means the CPU may be faster than CUDA when it comes to processing small amounts of data. Have you tried using larger data sets?
The CUDA profiler is a very handy tool that will tell you where your code is not optimal.
This is my code in C++ (I have used C++11; chrono is used to measure time in microseconds). My merge sort takes about 24 seconds to sort a randomly generated array of 10 million numbers, but my friends got results like 3 seconds. My code seems correct, and the only difference between mine and theirs is that they used the clock from ctime to measure time instead of chrono. Will this affect the deviation of my result? Please answer!
This is my code:
#include <iostream>
#include <climits>
#include <cstdlib>
#include <ctime>
#include <chrono>
using namespace std;
void merge_sort(long* inputArray, long low, long high);
void merge(long* inputArray, long low, long mid, long high);
int main(){
    srand(time(NULL));
    int n = 1000;
    long* inputArray = new long[n];
    for (long i = 0; i < n; i++){    // initialize the array of size n with n random numbers
        inputArray[i] = rand();      // generate a random number
    }
    auto Start = std::chrono::high_resolution_clock::now();   // start time for merge_sort
    merge_sort(inputArray, 0, n - 1);                          // sort the array of size n (high is an inclusive index)
    auto End = std::chrono::high_resolution_clock::now();     // end time for merge_sort
    cout << endl << endl << "Time taken for Merge Sort = "
         << std::chrono::duration_cast<std::chrono::microseconds>(End - Start).count()
         << " microseconds";                                   // display the time taken for merge sort
    delete [] inputArray;
    return 0;
}
void merge_sort(long* inputArray, long low, long high){
    if (low < high){
        long mid = (low + high) / 2;
        merge_sort(inputArray, low, mid);
        merge_sort(inputArray, mid + 1, high);
        merge(inputArray, low, mid, high);
    }
    return;
}
void merge(long* inputArray, long low, long mid, long high){
    long n1 = mid - low + 1;
    long n2 = high - mid;
    long *L = new long[n1 + 1];
    long *R = new long[n2 + 1];
    for (long i = 0; i < n1; i++){          // copy the left half (the extra slot is for the sentinel)
        L[i] = inputArray[low + i];
    }
    for (long j = 0; j < n2; j++){          // copy the right half
        R[j] = inputArray[mid + j + 1];
    }
    L[n1] = INT_MAX;                        // sentinels
    R[n2] = INT_MAX;
    long i = 0;
    long j = 0;
    for (long k = low; k <= high; k++){
        if (L[i] <= R[j]){
            inputArray[k] = L[i];
            i = i + 1;
        }
        else{
            inputArray[k] = R[j];
            j = j + 1;
        }
    }
    delete[] L;
    delete[] R;
}
There's no way the choice between two time-measurement methods accounts for a 20-second difference.
As others pointed out, the results will depend heavily on the platform, compiler optimization (a debug build can be much slower than a release build), and so on.
If you have the same settings as your friends and still see a performance issue, you may want to use a profiler to find out where your code is spending its time. There are profiling tools you can use on Linux; on Windows, Visual Studio's profiler is a good candidate.
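If you want to convince yourself that the timer is not the issue, here is a hedged sketch (std::sort stands in for your merge sort) that times the same call with both clock() and chrono; for a single-threaded, CPU-bound run the two should agree to within milliseconds - nowhere near a 24-second versus 3-second gap:

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

int main()
{
    std::vector<long> v(10000000);
    for (auto &x : v)
        x = std::rand();                       // fill with random numbers, as in the question

    std::clock_t c0 = std::clock();
    auto t0 = std::chrono::high_resolution_clock::now();
    std::sort(v.begin(), v.end());             // stand-in for merge_sort
    auto t1 = std::chrono::high_resolution_clock::now();
    std::clock_t c1 = std::clock();

    std::printf("clock(): %.3f s\n", double(c1 - c0) / CLOCKS_PER_SEC);
    std::printf("chrono:  %.3f s\n", std::chrono::duration<double>(t1 - t0).count());
    return 0;
}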
I am writing some software that would definitely benefit from integrating OpenMP. I am new to OpenMP, and while testing some very basic test code (see below) I noticed that the execution times are much longer with OpenMP activated (the #pragma line). Any insight is much appreciated.
int main()
{
    int number = 200;
    int max = 2000000;

    for(int t = 1; t < max; t++)
    {
        double fac = 0.0;
        #pragma omp parallel for reduction(+:fac)
        for(int n = 2; n <= number; n++)
            fac += 1;
    }

    return 0;
}
As currently written, the code encounters the parallel region max times. The overhead of entering a parallel region in an OpenMP program is small, but you incur it 2000000 times. You don't actually tell us what the run times are, but I can readily believe that this makes them far longer than the serial version's. I suggest you wrap the outer loop in a parallel region, not the inner loop.
Take care when you rewrite your code to ensure that the payload inside the parallel region is significant, and returns some value(s) to the program outside the parallel region. Absent these steps a crafty optimising compiler can determine that a loop returns nothing to the rest of the program and simply optimise it away.
Also insert some timing instructions (use omp_get_wtime), rerun your code and, if matters are still not satisfactory, update your question with the new information you gather.
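A minimal sketch of such a timing harness using omp_get_wtime(); the loop here is just a stand-in payload, and note that it accumulates into, and then prints, sum so that the compiler cannot discard the work:

#include <cstdio>
#include <omp.h>

int main()
{
    const long iterations = 100000000L;
    double sum = 0.0;

    double t0 = omp_get_wtime();
    #pragma omp parallel for reduction(+:sum)
    for (long n = 0; n < iterations; n++)
        sum += 1.0;
    double t1 = omp_get_wtime();

    std::printf("sum = %.0f, elapsed = %.3f s\n", sum, t1 - t0);  // printing sum keeps the loop alive
    return 0;
}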
Here is an improved version of the code that actually works as intended. It wraps the outer loop, rather than the inner one, in the parallel region. When compiled without OpenMP support it takes 1.49 s; with OpenMP, 0.48 s.
int main()
{
    int number = 200;
    int max = 2000000;

    #pragma omp parallel for
    for(int t = 1; t < max; t++)
    {
        double fac = 0.0;
        for(int n = 2; n <= number; n++)
            fac += 1;
    }

    return 0;
}
We have been experimenting with different histogramming algorithms on a CUDA GPU. Most of the results I can explain, but we noticed some really weird features of which I have no clue what is causing them.
Kernels
The weird stuff happens in a data-parallel implementation. This means that the data is distributed over the threads. Each thread looks at a subset (ideally just 1) of the data, and adds its contribution to a histogram in global memory, which requires atomic operations.
__global__ void histogram1(float *data, uint *hist, uint n,
                           float xMin, float binWidth, uint nBins)
{
    uint const nThreads = blockDim.x * gridDim.x;
    uint const tid = threadIdx.x + blockIdx.x * blockDim.x;

    uint idx = tid;
    while (idx < n)
    {
        float x = data[idx];
        uint bin = (x - xMin) / binWidth;
        atomicAdd(hist + bin, 1);
        idx += nThreads;
    }
}
As a first optimization, each block first constructs a partial histogram in shared memory before doing a reduction of partial histograms to obtain the final result in global memory. The code is pretty straightforward, and I believe that it's very similar to that used in Cuda By Example.
__global__ void histogram2(float *data, uint *hist, uint n,
                           float xMin, float binWidth, uint nBins)
{
    extern __shared__ uint partialHist[]; // size = nBins * sizeof(uint)

    uint const nThreads = blockDim.x * gridDim.x;
    uint const tid = threadIdx.x + blockIdx.x * blockDim.x;

    // initialize shared memory to 0
    uint idx = threadIdx.x;
    while (idx < nBins)
    {
        partialHist[idx] = 0;
        idx += blockDim.x;
    }
    __syncthreads();

    // Calculate partial histogram (in shared mem)
    idx = tid;
    while (idx < n)
    {
        float x = data[idx];
        uint bin = (x - xMin) / binWidth;
        atomicAdd(partialHist + bin, 1);
        idx += nThreads;
    }
    __syncthreads();

    // Compute resulting total (global) histogram
    idx = threadIdx.x;
    while (idx < nBins)
    {
        atomicAdd(hist + idx, partialHist[idx]);
        idx += blockDim.x;
    }
}
Results
Speedup vs n
We benchmarked these two kernels to see how they behave as a function of n, which is the number of datapoints. The data was uniform randomly distributed. In the figure below, HIST_DP_1 is the unoptimized trivial version, whereas HIST_DP_2 is the one using shared memory to speed things up:
The timings have been taken relative to the CPU performance, and the weird stuff happens for very large datasets. The optimized version, instead of flattening out like the unoptimized one, starts to improve again (relatively). We'd expect that for large datasets the occupancy of our card will be near 100%, which would mean that from that point on the performance would scale linearly, like the CPU (and indeed like the unoptimized blue curve).
The behavior could be due to the chance of two threads performing an atomic operation on the same bin in shared/global memory going to zero for large datasets, but in that case we would expect the drop to occur in different places for different nBins. This is not what we observe: the drop happens in all three panels at around 10^7 data points. What is happening here? Some complicated caching effect? Or is it something obvious that we missed?
Speedup vs nBins
To have a closer look at the behavior as a function of the number of bins, we fixed our dataset at 10^4 (10^5 in one case), and ran the algorithms for many different bin-numbers.
As a reference we also generated some non-random data. The red graph shows the results for perfectly sorted data, whereas the light-blue line corresponds to a dataset in which every value was identical (maximal congestion in the atomic operations). The question is obvious: what is the discontinuity doing there?
System Setup
NVidia Tesla M2075, driver 319.37
Cuda 5.5
Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
Thanks for your help!
EDIT: Reproduction Case
As requested: a compiling, runnable reproduction case. The code is quite long, which is why I didn't include it in the first place. The snippet is available on snipplr. To make your life even easier, I'll include a little shell script to run it with the same settings I used, and an Octave script to produce the plots.
Shell script
#!/bin/bash
runs=100
# format: [n] [nBins] [t_cpu] [t_gpu1] [t_gpu2]
for nBins in 100 1000 10000
do
for n in 10 50 100 200 500 1000 2000 5000 10000 50000 100000 500000 1000000 10000000 100000000
do
echo -n "$n $nBins "
./repro $n $nBins $runs
done
done
Octave script
T = load('repro.txt');
bins = unique(T(:,2));
t = cell(1, numel(bins));
for i = 1:numel(bins)
t{i} = T(T(:,2) == bins(i), :);
subplot(2, numel(bins), i);
loglog(t{i}(:,1), t{i}(:,3:5))
title(sprintf("nBins = %d", bins(i)));
legend("cpu", "gpu1", "gpu2");
subplot(2, numel(bins), i + numel(bins));
loglog(t{i}(:,1), t{i}(:,4)./t{i}(:,3), ...
t{i}(:,1), t{i}(:,5)./t{i}(:,3));
title("relative");
legend("gpu1/cpu", "gpu2/cpu");
end
Absolute Timings
Absolute timings show that it's not the CPU slowing down. Instead, the GPU is speeding up relatively:
Regarding question 1:
This is not what we observe, the drop is in all three panels at around 10^7 bins. What is happening here? Some complicated caching effect? Or is it something obvious that we missed?
This drop is due to the limit you've set on the maximum number of blocks (1<<14 == 16384). At n=10^7, in gpuBench2, the limit has kicked in and each thread starts processing multiple elements. At n=10^8 each thread works on 12 (sometimes 11) elements. If you remove this cap you can see that your performance continues to flatline.
Why is this faster? Multiple elements per thread allows for latency of the load from data to be hidden much better, especially in the case with 10000 bins where you are only able to fit one block on to each SM due to the high shared memory usage. In this case, every element in the block will reach the global load at around the same time, and none will be able to continue until it has completed its load. By having multiple elements we can pipeline these loads, getting many elements per thread for the latency of one.
(You don't see this in gpuBench1, as it is not latency bound but bandwidth bound to L2. You can see this very quickly if you import the output of nvprof into the visual profiler.)
Regarding question 2:
The question is obvious: what is the discontinuity doing there?
I don't have a Fermi to hand, and I can't reproduce this on my Kepler, so I'd assume it's something that is Fermi specific. That's the danger of answering questions with two parts, I suppose!
I am benchmarking some R statements (see details here) and found that my elapsed time is way longer than my user time.
user system elapsed
7.910 7.750 53.916
Could someone help me to understand what factors (R or hardware) determine the difference between user time and elapsed time, and how I can improve it? In case it helps: I am running data.table data manipulation on a Macbook Air 1.7Ghz i5 with 4GB RAM.
Update: My crude understanding is that user time is how long my CPU takes to process my job, and elapsed time is the length of time from when I submit a job until I get the data back. What else did my computer need to do after processing for 8 seconds?
Update: as suggested in the comments, I ran the procedures a couple of times on two data.tables: Y, with 104 columns (sorry, I add more columns as time goes by), and X, a subset of Y with only the 3 key columns. Below are the results. Please note that I ran these two procedures consecutively, so the memory state should be similar.
X<- Y[, list(Year, MemberID, Month)]
system.time(
{X[ , Month:= -Month]
setkey(X,Year, MemberID, Month)
X[,Month:=-Month]}
)
user system elapsed
3.490 0.031 3.519
system.time(
{Y[ , Month:= -Month]
setkey(Y,Year, MemberID, Month)
Y[,Month:=-Month]}
)
user system elapsed
8.444 5.564 36.284
Here are the sizes of the only two objects in my workspace (commas added):
object.size(X)
83,237,624 bytes
object.size(Y)
2,449,521,080 bytes
Thank you
User time is how many seconds the computer spent doing your calculations. System time is how much time the operating system spent responding to your program's requests. Elapsed time is the sum of those two, plus whatever "waiting around" your program and/or the OS had to do. It's important to note that these numbers are the aggregate of time spent. Your program might compute for 1 second, then wait on the OS for one second, then wait on disk for 3 seconds and repeat this cycle many times while it's running.
Based on the fact that your program took as much system time as user time, it was doing something very IO intensive: reading from disk a lot or writing to disk a lot. RAM is pretty fast, a few hundred nanoseconds usually. So if everything fits in RAM, elapsed time is usually just a little bit longer than user time. But disk might take a few milliseconds to seek and even longer to reply with the data. That's slower by a factor of a million.
We've determined that your processor was "doing stuff" for ~8 + ~8 = ~ 16 seconds. What was it doing for the other ~54 - ~16 = ~38 seconds? Waiting for the hard drive to send it the data it asked for.
UPDATE1:
Matthew has made some excellent points: I'm making assumptions that I probably shouldn't be making. Adam, if you'd care to publish a list of all the columns in your table (their datatypes are all we need), we can get a better idea of what's going on.
I just cooked up a little do-nothing program to validate my assumption that time not spent in userspace and kernel space is likely spent waiting for IO.
#include <stdio.h>

int main()
{
    int i;
    for(i = 0; i < 1000000000; i++)
    {
        int j, k, l, m;
        j = 10;
        k = i;
        l = j + k;
        m = j + k - i + l;
    }
    return 0;
}
When I run the resulting program and time it I see something like this:
mike#computer:~$ time ./waste_user
real 0m4.670s
user 0m4.660s
sys 0m0.000s
mike#computer:~$
As you can see by inspection, the program does no real work, and as such it doesn't ask the kernel to do anything beyond loading it into RAM and starting it running. So nearly ALL of the "real" time is spent as "user" time.
Now a kernel-heavy do-nothing program (with a few less iterations to keep the time reasonable):
#include <stdio.h>

int main()
{
    FILE * random;
    random = fopen("/dev/urandom", "r");
    int i;
    for(i = 0; i < 10000000; i++)
    {
        fgetc(random);
    }
    return 0;
}
When I run that one, I see something more like this:
mike#computer:~$ time ./waste_sys
real 0m1.138s
user 0m0.090s
sys 0m1.040s
mike#computer:~$
Again it's easy to see by inspection that the program does little more than ask the kernel to give it random bytes. /dev/urandom is a non-blocking source of entropy. What does that mean? The kernel uses a pseudo-random number generator to quickly generate "random" values for our little test program. That means the kernel has to do some computation but it can return very quickly. So this program mostly waits for the kernel to compute for it, and we can see that reflected in the fact that almost all the time is spent on sys.
Now we're going to make one little change. Instead of reading from /dev/urandom which is non-blocking we'll read from /dev/random which is blocking. What does that mean? It doesn't do much computing but rather it waits around for stuff to happen on your computer that the kernel developers have empirically determined is random. (We'll also do far fewer iterations since this stuff takes much longer)
#include <stdio.h>

int main()
{
    FILE * random;
    random = fopen("/dev/random", "r");
    int i;
    for(i = 0; i < 100; i++)
    {
        fgetc(random);
    }
    return 0;
}
And when I run and time this version of the program, here's what I see:
mike#computer:~$ time ./waste_io
real 0m41.451s
user 0m0.000s
sys 0m0.000s
mike#computer:~$
It took 41 seconds to run, but an immeasurably small amount of time on user and sys. Why is that? All the time was spent in the kernel, but not doing active computation. The kernel was just waiting for stuff to happen. Once enough entropy was collected, the kernel would wake back up and send the data back to the program. (Note it might take much less or much more time to run on your computer, depending on what all is going on.) I argue that the difference in time between user+sys and real is IO.
So what does all this mean? It doesn't prove that my answer is right because there could be other explanations for why you're seeing the behavior that you are. But it does demonstrate the differences between user compute time, kernel compute time and what I'm claiming is time spent doing IO.
Here's my source for the difference between /dev/urandom and /dev/random:
http://en.wikipedia.org/wiki//dev/random
UPDATE2:
I thought I would try and address Matthew's suggestion that perhaps L2 cache misses are at the root of the problem. The Core i7 has a 64 byte cache line. I don't know how much you know about caches, so I'll provide some details. When you ask for a value from memory the CPU doesn't get just that one value, it gets all 64 bytes around it. That means if you're accessing memory in a very predictable pattern -- like say array[0], array[1], array[2], etc -- it takes a while to get value 0, but then 1, 2, 3, 4... are much faster. Until you get to the next cache line, that is. If this were an array of ints, 0 would be slow, 1..15 would be fast, 16 would be slow, 17..31 would be fast, etc.
http://software.intel.com/en-us/forums/topic/296674
In order to test this out I've made two programs. They both have an array of structs in them with 1024*1024 elements. In one case the struct has a single double in it, in the other it's got 8 doubles in it. A double is 8 bytes long so in the second program we're accessing memory in the worst possible fashion for a cache. The first will get to use the cache nicely.
#include <stdio.h>
#include <stdlib.h>

#define MANY_MEGS 1048576

typedef struct {
    double a;
} PartialLine;

int main()
{
    int i, j;
    PartialLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(PartialLine);
    printf("Striding through %d total bytes, %d bytes at a time\n", total_bytes, sizeof(PartialLine));

    many_lines = (PartialLine*) malloc(total_bytes);

    PartialLine line;
    double x;
    for(i = 0; i < 300; i++)
    {
        for(j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}
When I run this program I see this output:
mike#computer:~$ time ./cache_hits
Striding through 8388608 total bytes, 8 bytes at a time
real 0m3.194s
user 0m3.140s
sys 0m0.016s
mike#computer:~$
Here's the program with the big structs, they each take up 64 bytes of memory, not 8.
#include <stdio.h>
#include <stdlib.h>

#define MANY_MEGS 1048576

typedef struct {
    double a, b, c, d, e, f, g, h;
} WholeLine;

int main()
{
    int i, j;
    WholeLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(WholeLine);
    printf("Striding through %d total bytes, %d bytes at a time\n", total_bytes, sizeof(WholeLine));

    many_lines = (WholeLine*) malloc(total_bytes);

    WholeLine line;
    double x;
    for(i = 0; i < 300; i++)
    {
        for(j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}
And when I run it, I see this:
mike#computer:~$ time ./cache_misses
Striding through 67108864 total bytes, 64 bytes at a time
real 0m14.367s
user 0m14.245s
sys 0m0.088s
mike#computer:~$
The second program -- the one designed to have cache misses -- took five times as long to run for the exact same number of memory accesses.
Also worth noting is that in both cases, all the time was spent in user, not sys. That means the OS counts the time your program has to wait for data against your program, not against the operating system. Given these two examples I think it's unlikely that cache misses are causing your elapsed time to be substantially longer than your user time.
UPDATE3:
I just saw your update that the really slimmed down table ran about 10x faster than the regular-sized one. That too would indicate to me that (as another Matthew also said) you're running out of RAM.
Once your program tries to use more memory than your computer actually has installed, it starts swapping to disk. This is better than your program crashing, but it's much slower than RAM and can cause substantial slowdowns.
I'll try and put together an example that shows swap problems tomorrow.
UPDATE4:
Okay, here's an example program which is very similar to the previous one. But now the struct is 4096 bytes, not 8 bytes. In total this program will use 2GB of memory rather than 64MB. I also change things up a bit and make sure that I access things randomly instead of element-by-element, so that the kernel can't get smart and start anticipating my program's needs. The caches are driven by hardware (driven solely by simple heuristics), but it's entirely possible that kswapd (the kernel swap daemon) could be substantially smarter than the cache.
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double numbers[512];
} WholePage;

int main()
{
    int memory_ops = 1024*1024;
    int total_memory = memory_ops / 2;

    int num_chunks = 8;
    int chunk_bytes = total_memory / num_chunks * sizeof(WholePage);

    int i, j, k, l;
    printf("Bouncing through %u MB, %d bytes at a time\n", chunk_bytes/1024*num_chunks/1024, sizeof(WholePage));

    WholePage* many_pages[num_chunks];
    for(i = 0; i < num_chunks; i++)
    {
        many_pages[i] = (WholePage*) malloc(chunk_bytes);
        if(many_pages[i] == 0){ exit(1); }
    }

    WholePage* page_list;
    WholePage* page;
    double x;
    for(i = 0; i < 300*memory_ops; i++)
    {
        j = rand() % num_chunks;
        k = rand() % (total_memory / num_chunks);
        l = rand() % 512;

        page_list = many_pages[j];
        page = page_list + k;
        x = page->numbers[l];
    }

    return 0;
}
Going from the program I called cache_hits to cache_misses, we saw the memory footprint increase 8x and the execution time increase 5x. What do you expect to see when we run this program? It uses 32x as much memory as cache_misses but has the same number of memory accesses.
mike#computer:~$ time ./page_misses
Bouncing through 2048 MB, 4096 bytes at a time
real 2m1.327s
user 1m56.483s
sys 0m0.588s
mike#computer:~$
It took 8x as long as cache_misses and 40x as long as cache_hits. And this is on a computer with 4GB of RAM. I used 50% of my RAM in this program versus 1.5% for cache_misses and 0.2% for cache_hits. It got substantially slower even though it wasn't using up ALL the RAM my computer has. It was enough to be significant.
I hope this is a decent primer on how to diagnose problems with programs running slow.