For Loop Time Complexity - performance

I was curious to find out: if you have a two-way iterating for loop, does it decrease the time complexity, and if so, by how much? I know most people write a standard for loop:
for (int index = 0; index < count - 1; index++)
{
    if (Something(index) == "Hello")
    {
        return true;
    }
}
return false;
How much better would it be to use a two-way iterating for loop to reduce the time?
int index2 = count - 1;
for (int index = 0; index < count - 1; index++)
{
    if (Something(index) == "Hello" || Something(index2) == "Hello")
    {
        return true;
    }
    index2--;
    if (index >= index2)
    {
        return false;
    }
}
return false;

Given no extra information about the underlying data in the array, both will actually be of the same order of complexity in terms of array lookups and comparison operations. The order of complexity is not about how many times the loop runs, but about the total number of operations performed. The first version loops n times and does 1 comparison per iteration, which is n*1 = n operations in total. The second does n/2 iterations with 2 comparisons per iteration, which is (n/2)*2 = n operations. You can see that these are the same.
However, in practice the second version will do worse on many architectures because of extra cache misses. If the start and the end of the array are far apart, you keep having to go back to main memory to load data into the cache, which is much more expensive than a simple comparison. This is why compilers may optimize such code by transforming it into something like the first form.

The time complexity is the same, since the complexity is by definition independent of any constant factor (like 2).
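To make the operation counting concrete, here is a small Python sketch (using a plain list and a hypothetical target value in place of the Something(...) call) that counts comparisons for both scan orders in the worst case where nothing matches; both come out at roughly n.

def one_way(items, target):
    comparisons = 0
    for x in items:                      # n iterations, 1 comparison each
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons

def two_way(items, target):
    comparisons = 0
    lo, hi = 0, len(items) - 1
    while lo <= hi:                      # about n/2 iterations, 2 comparisons each
        comparisons += 2
        if items[lo] == target or items[hi] == target:
            return True, comparisons
        lo += 1
        hi -= 1
    return False, comparisons

data = ["x"] * 1000                      # worst case: "Hello" is not present
print(one_way(data, "Hello"))            # (False, 1000)
print(two_way(data, "Hello"))            # (False, 1000)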

Related

Would I be correct in assuming this function runs in linear (O(n)) time?

Here is a function I wrote in pseudo code:
partition(itemList) {
numPackets = calculateNumOfPackets(listSize, packetSize);
indexOfNextItem = 0;
packetQueue = initialize(numPackets);
for (i = 0; i < numPackets; i++) {
// Initialized as a fixed-size list
Packet p = createNew(packetSize);
for (j = indexOfNextItem; j < itemList.length; j++) {
// hasRoom() returns false when packet is at capacity
if (p.hasRoom())
// Guaranteed to run in constant time due to predefined capacity
p.add(itemList[j]);
else {
indexOfNextItem = j; // keep track of next index for inner loop
break;
}
} // end inner
packetQueue.add(p);
} // end outer
return packetQueue;
}
As I hope is clear, this just does partitioning and returns a partitioned queue of "packets" that contains the items of the input list. I'm pretty sure this runs in linear time, because the inner loop doesn't run in full for each iteration of the outer loop; it only runs until the current packet is full, at which point it records the index where it left off and breaks out of the inner loop.
Am I understanding this correctly?
If createNew(packetSize) is linear in packetSize, initialize(numPackets) is linear in numPackets, and all of p.hasRoom(), p.add(), itemList[i] and packetQueue.add(p) are O(1), your algorithm is O(listSize) (assuming listSize is len(itemList)).
The sketch of the proof is that each run of the inner loop executes at most packetSize O(1) operations, and the inner loop is started at most ceil(listSize / packetSize) times, so the total number of operations is at most (numPackets + 1) * packetSize * c, where c is a constant related to the number of operations done in each iteration.
One of your comments states that:
Given a list of items, the algorithm is supposed to add a certain
number of these items to packets (represented as fixed-size lists) and
returns a queue of said packets. So if the input list had 100 items,
and the max packet capacity allowed is 10, then you'd get a queue of
10 packets each having 10 items.
If this is true, then, since each item is only included in 1 packet, your algorithm is linear in the number of items (O(itemList.length)) - assuming that placing items into packets is constant-time.
Counting nested loops only makes sense if the loop counters are independent. If you know that, as in this case, every item in a list is being visited once and only once, and that visit is constant-time, you can confidently state that such code is linear in the number of items.
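For intuition, here is a minimal Python sketch of the same partitioning idea (hypothetical names; packets modelled as plain lists of a fixed capacity): every item is appended to exactly one packet, so the total work is proportional to the number of items.

def partition(item_list, packet_size):
    """Split item_list into packets of at most packet_size items: O(len(item_list))."""
    packet_queue = []
    packet = []
    for item in item_list:                # each item is visited exactly once
        packet.append(item)               # O(1) append
        if len(packet) == packet_size:    # packet is full: queue it, start a new one
            packet_queue.append(packet)
            packet = []
    if packet:                            # last, possibly partial, packet
        packet_queue.append(packet)
    return packet_queue

print(partition(list(range(10)), 4))      # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]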

Recursive algorithm time complexity

I am trying to find the time complexity of a recursive function that merges n files.
My solution is T(n)=kc+T(n-(k+1)) where n > 0, T(n)=T(0) where n=0.
Is this correct or is there any other way of finding the time complexity?
Here is the pseudo code,
//GLOBAL VARIABLES
int nRecords = 0...k; //assume there are k records
int numFiles = 0...n; //assume there are n files
String mainArray[0...nRecords]; //main array that stores the file records
void mergeFiles(numFiles) { //params numFiles
fstream file; //file variable
if (numFiles == 0) {
ofstream outfile; //file output variable
outfile.open(directory / mergedfile); // point variable to directory
for (int i = 0; i < sizeOf(mainArray); i++) {
outfile << mainArray[i]; // write content of mainArray to outfile
}
outfile.close(); //close once operation is done
} else {
int i = 0; //file index counter
file.open(directory / nextfile); //open file to be read
if (file.isOpen()) {
while (!file.eof() && i < sizeOf(mainArray)) {
file >> mainArray[i]; //copy contents of file to mainArray
i++; //increase array index
}
}
file.close(); //close once operation is done
mergeFiles(numFiles - 1); //recurse function
}
}
int main() {
mergeFiles(numFiles); //call mergeFile function to main
}
Going by your formula:
T(n) = kc + T(n-(k+1)) = kc + kc + T(n-2(k+1)) = ... = kc*(n/(k+1)) + T(0) ~ nc = O(n).
The definition of k is a bit ambiguous in your question, because the formula you provided for T(n) seems to assume you process k records per file, while the definition of mainArray in the code suggests that k represents the total number of records, not the number of records in an individual file.
I will first assume the second definition of k is the correct one, so you have:
n = number of files
k = total number of records in these files = size of array
Time complexity of read/write operations
I think you assume the following two statements -- which read/write one record -- run each in constant time:
file >> mainArray[i];
outfile << mainArray[i];
Note that the time needed for such operations is generally dependent on the size of the record. But as you did not provide that size as something to consider, I will assume records have a constant size, and thus these operations can be considered to run in O(1), i.e. constant time.
About recursion
Although you use recursion, it is really tail recursion, so the time complexity is no different from that of an iterative algorithm. Either way, the else block is executed n times.
It is in fact not so straightforward to calculate the time complexity with a recursive formula, as you don't know how many records there are in one file, only in all files together. You can work around this, and artificially assume there are k/n records in each file, but I find it much more intuitive to perform the measurement based on the absolute number of times the else block is executed, without the need to express this in a recursive formula.
Measurements
The body of the inner while loop can in total execute k times at the most, and given that you assume there are just as many records in your files, it will execute exactly k times in total.
The final part (where numFiles == 0) has a for loop that also executes k times.
So the ingredients determining the time complexity are:
A constant time for opening/closing a file, multiplied by n
A constant time for reading/writing a record, multiplied by k
So the time complexity is O(n+k)
If definition of k is different
If k should denote the number of records in one file, then your code is wrong, as the size of the array then has to be n*k instead of k. Supposing you still intended that, a similar reasoning gives a time complexity of O(n*k).
Note concerning the correctness of the program
In a real situation you would have to make sure the size of your array corresponds to the total number of records in your files, and not just assume that it does. If the array turns out to be smaller, you would not be able to store some records; if, on the other hand, the array is larger, the code that dumps the array into the output file would include array elements that were never initialised.
You would therefore be better off using a dynamically sized array (one you only push records onto), so that its size corresponds exactly to the number of records that have actually been read into it.
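As a rough illustration of the O(n + k) behaviour and of the dynamically sized container suggested above, here is a sketch in Python (hypothetical file names, one record per line) that reads n input files holding k records in total and writes them all to one merged file.

def merge_files(input_paths, output_path):
    """Read every record from each input file, then write them all out: O(n + k)."""
    records = []                           # grows to exactly k records, no guessing of sizes
    for path in input_paths:               # n open/close operations
        with open(path) as f:
            for line in f:                 # k record reads in total across all files
                records.append(line.rstrip("\n"))
    with open(output_path, "w") as out:
        for record in records:             # k record writes
            out.write(record + "\n")

# merge_files(["file1.txt", "file2.txt"], "mergedfile.txt")  # hypothetical paths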

Find all number pairs in a given range

I have N numbers, let's say 20 30 15 30 30 40 15 20. Now I want to find how many number pairs are in a given range (L and R given).
A number pair means both numbers are the same.
My approach:
Create a map of arrays, such that the key of the map is the number and the value is an ArrayList of the indexes at which that number appears. Then I traverse from L to R and, for each value in that range, I traverse the corresponding ArrayList to find whether there is a pair that fits in the range, and then increment the count.
But I think this approach is too slow. Is there some faster method to do the same?
Example: for the above sequence with L=0 and R=6:
Answer=5. Possible pairs are 1 for 20, 1 for 15 and 3 for 30.
I am developing a solution, assuming the numbers can be up to 10^8 (and non-negative).
If you are looking for speed and don't care about memory, there may be a better way.
You can use a set as an auxiliary data structure to see if a number was found, and then simply walk the array. Pseudo code:
int numPairs = 0;
set setVisited;
for (int i = L; i < R; i++) {
    if (setVisited.contains(a[i])) {
        // found the second of a pair. count it up and reset.
        numPairs++;
        setVisited.remove(a[i]);
    } else {
        // remember that we saw this number, so we can spot the next pair.
        setVisited.add(a[i]);
    }
}
New solution... hopefully better this time. Pseudo C-ish code:
// Sort the sub-array a[L..R]. This can be done O(nlogn) using qsort.
// ... code omitted ...
// Walk through the sorted array counting how many times number occurs.
// When the number changes, count how many possibles ways to make pairs
// from the given count.
int totalPairs = 0;
int count = 1;
int current = a[L];
for (i = L+1; i < R; i++) {
    if (a[i] == current) { // found another, keep counting
        count++;
    } else { // found a different one
        if (count > 1) { // need at least 2 to make a pair!
            totalPairs += factorial(count) / 2;
        }
        // start counting the new one
        current = a[i];
        count = 1;
    }
}
// count the final one
if (count > 1) {
    totalPairs += factorial(count) / 2;
}
The sort runs in O(n log n) and the loop runs in O(n). Interestingly, the performance barrier is now the factorial. For really long arrays with really high numbers of occurrences, the factorial is expensive unless you optimize further.
One way would be to have the loop count repetitions but not compute the factorial yet -- leaving yet another array of counts. Then sort this array (again O(n log n)), walk through it, and re-use the previously computed factorial to compute the next one.
Also if this array gets big, you'll need a large integer to represent the total. I don't know the O() performance of large integers off the top of my head.
Cool problem!
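An alternative to sorting is to count how often each value occurs in a[L..R] with a hash map and add c*(c-1)/2 pairs for every value that occurs c times (the number of ways to choose 2 out of c, which matches the three pairs for the three 30s in the example). A minimal Python sketch, assuming the range bounds are inclusive:

from collections import Counter

def count_pairs(a, L, R):
    # Count pairs of equal values within a[L..R] (inclusive) in O(R - L) time.
    counts = Counter(a[L:R + 1])                       # frequency of each value in the range
    return sum(c * (c - 1) // 2 for c in counts.values())

print(count_pairs([20, 30, 15, 30, 30, 40, 15, 20], 0, 7))  # 5 = 1 (20s) + 1 (15s) + 3 (30s)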

How to find an element in a linked list of blocks (containing n elements) as fast as possible?

My data structure is a linked list of blocks. A block contains 31 4-byte elements and one 4-byte pointer to the next block or NULL (128 bytes per block in total). I add elements from time to time. If the last block is full, I add another block via the pointer.
One objective is to use as little memory (= blocks) as possible and to leave no free space between two elements in a block.
This setting is fixed. All code runs on a 32-bit ARM Cortex-A8 CPU with a NEON pipeline.
Question:
How to find a specific element in that data structure as quickly as possible?
Approach (right now):
I use sorted blocks and binary search to check for an element (9 bits of the 4 bytes are the search criterion). If the desired element is not in the current block, I jump to the next block. If the element is not in the last block and the last block is not yet full, I use the result of the binary search to insert the new element (if necessary, I make space using memmove within the block). Thus all blocks are always sorted.
Do you have an idea to make that faster?
This is how I search right now: (q->getPosition() is an inline function that just extracts the 9-bit position from the element via "& bitmask")
do
{
// binary search algorithm (bsearch)
// from http://www.google.com/codesearch/
// p?hl=en#qoCVjtE_vOw/gcc4/trunk/gcc-
// 4.4.3/libiberty/bsearch.c&q=bsearch&sa=N&cd=2&ct=rc
base = &(block->points[0]);
if (block->next == NULL)
{
pointsInBlock = pointsInLastBlock;
stop = true;
}
else
{
block = block->next;
}
for (lim = pointsInBlock; lim != 0; lim >>= 1)
{
q = base + (lim >> 1);
cmp = quantizedPosition - q->getPosition();
if (cmp > 0)
{
// quantizedPosition > q: move right
base = q + 1;
lim--;
}
else if (cmp == 0)
{
// We found the QuantPoint
*outQuantPoint = q;
return true;
}
// else move left
}
}
while (!stop);
Since the bulk of the time is spent in the within-block search, that needs to be as fast as possible. Since the number of elements is fixed, you can completely unroll that loop, as in:
if (key < a[16]){
if (key < a[8]){
...
}
else { // key >= a[8] && key < a[16]
...
}
}
else { // key >= a[16]
if (key < a[24]){
...
}
else { // key >= a[24]
...
}
}
Study the generated assembly language and single-step it in a debugger, to make sure the compiler's giving you good code.
You might want to write a little program to print out the above code, as it will be hard to write by hand, or possibly generate it with macros.
ADDED: Just noticed your 9-bit search criterion. In that case, just pre-allocate an array of 512 4-byte words, and index it directly. That's the fastest, and the least code.
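As a minimal sketch of that direct-indexing idea (written in Python for brevity; it assumes the 9-bit position is unique per stored element, and all names are hypothetical):

TABLE_SIZE = 512                  # 2**9 possible 9-bit positions

table = [None] * TABLE_SIZE       # one slot per possible position

def insert(element, position):
    table[position] = element     # position is the element's 9-bit key

def lookup(position):
    return table[position]        # O(1): a single index operation, no search at all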
ALSO ADDED: If you need to keep your block structure, there's another way to do the unrolled binary search. It's the Jon Bentley method:
i = 0;
if (key >= a[i+16]) i += 16;
if (key >= a[i+ 8]) i += 8;
if (key >= a[i+ 4]) i += 4;
if (key >= a[i+ 2]) i += 2;
if (i < 30 && key >= a[i+ 1]) i += 1; // this excludes 31
if (key == a[i]) // then key is found
That's slower than the if-tree above, because of manipulating i, but could be substantially less code.
Let the number of elements in each block be m and the total number of blocks currently in the list be n. Then the current time complexity of your algorithm is O(n log m).
If you cannot move elements once they are added to a block, then I don't think you can do better in terms of time complexity than what you are already doing. (You could keep track of the maximum and minimum elements in a block, and skip the blocks if the element does not lie in this range. But this is not going to give you much gain. This will also waste space keeping track of the minimum and maximum for each block)
If you can afford to spend time while inserting the element and can move elements from one block to another, then here is a scheme that has time complexity O(log (mn)).
Basically, you keep all elements in sorted order. When a new element has to be inserted, binary search across block boundaries and insert it in its correct location, shifting elements to create space. This will lead to O(nm) time while inserting elements but O(log (mn)) when finding an element.
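A minimal Python sketch of the lookup side of that scheme, assuming you also keep an indexable list of the blocks together with each block's largest element (something a plain linked list of blocks does not give you for free):

from bisect import bisect_left

def find(blocks, maxima, key):
    # blocks: list of sorted lists, globally sorted; maxima[i] == blocks[i][-1]
    b = bisect_left(maxima, key)          # O(log n): first block whose maximum is >= key
    if b == len(blocks):
        return None                       # key is larger than everything stored
    i = bisect_left(blocks[b], key)       # O(log m): binary search within that block
    if i < len(blocks[b]) and blocks[b][i] == key:
        return (b, i)                     # (block index, position within the block)
    return None

blocks = [[1, 4, 7], [9, 12, 15], [20, 31, 40]]
maxima = [7, 15, 40]
print(find(blocks, maxima, 12))           # (1, 1)
print(find(blocks, maxima, 13))           # None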
If this search criterion for an element is fixed, you would be better off moving the search into a separate index structure, because the maximal number of values you can distinguish by your search criterion is only 2^9 = 512, so the maximal size of the search index would be (2 + 4)*512 = 3072 bytes; you could also use something other than a static index if you needed to, saving some memory. For now, imagine it as a table of 512 pairs <9-bit index, direct address>, which should be very fast (only one NULL check and one dereference, respectively).
Generally the answer to your question also depends on what other operations you want to perform on your structure and how frequently you perform each of them (including the search). If all you want is search(9 bits) -> add/modify/read, then your block structure would be useless.
You could list them here and maybe add what language you're using.
Edit 3:
I just noticed you can't change the blocks' size. But is your search for efficiency reasons only, or do you need the elements of the list to be unique (by those 9 bits)?

Is it possible to rearrange an array in place in O(N)?

If I have a size N array of objects, and I have an array of unique numbers in the range 1...N, is there any algorithm to rearrange the object array in-place in the order specified by the list of numbers, and yet do this in O(N) time?
Context: I am doing a quick-sort-ish algorithm on objects that are fairly large in size, so it would be faster to do the swaps on indices than on the objects themselves, and only move the objects in one final pass. I'd just like to know if I could do this last pass without allocating memory for a separate array.
Edit: I am not asking how to do a sort in O(N) time, but rather how to do the post-sort rearranging in O(N) time with O(1) space. Sorry for not making this clear.
I think this should do:
static <T> void arrange(T[] data, int[] p) {
boolean[] done = new boolean[p.length];
for (int i = 0; i < p.length; i++) {
if (!done[i]) {
T t = data[i];
for (int j = i;;) {
done[j] = true;
if (p[j] != i) {
data[j] = data[p[j]];
j = p[j];
} else {
data[j] = t;
break;
}
}
}
}
}
Note: This is Java. If you do this in a language without garbage collection, be sure to delete done.
If you care about space, you can use a BitSet for done. I assume you can afford an additional bit per element because you seem willing to work with a permutation array, which is several times that size.
This algorithm copies instances of T n + k times, where k is the number of cycles in the permutation. You can reduce this to the optimal number of copies by skipping those i where p[i] = i.
The approach is to follow the "permutation cycles" of the permutation, rather than indexing the array left-to-right. But since you do have to begin somewhere, every time a new permutation cycle is needed, the search for unpermuted elements is left-to-right:
// Pseudo-code
N : integer, N > 0 // N is the number of elements
swaps : integer [0..N]
data[N] : array of object
permute[N] : array of integer [-1..N] denoting permutation (used element is -1)
next_scan_start : integer;
swaps = 0;
next_scan_start = 0;
while (swaps < N )
{
// Search for the next index that is not-yet-permuted.
for (idx_cycle_search = next_scan_start;
idx_cycle_search < N;
++ idx_cycle_search)
if (permute[idx_cycle_search] >= 0)
break;
next_scan_start = idx_cycle_search + 1;
// This is a provable invariant. In short, number of non-negative
// elements in permute[] equals (N - swaps)
assert( idx_cycle_search < N );
// Completely permute one permutation cycle, 'following the
// permutation cycle's trail' This is O(N)
while (permute[idx_cycle_search] >= 0)
{
swap( data[idx_cycle_search], data[permute[idx_cycle_search]] );
swaps ++;
old_idx = idx_cycle_search;
idx_cycle_search = permute[idx_cycle_search];
permute[old_idx] = -1;
// Also '= -idx_cycle_search -1' could be used rather than '-1'
// and would allow reversal of these changes to permute[] array
}
}
Do you mean that you have an array of objects O[1..N] and then you have an array P[1..N] that contains a permutation of numbers 1..N and in the end you want to get an array O1 of objects such that O1[k] = O[P[k]] for all k=1..N ?
As an example, if your objects are letters A,B,C...,Y,Z and your array P is [26,25,24,..,2,1] is your desired output Z,Y,...C,B,A ?
If yes, I believe you can do it in linear time using only O(1) additional memory. Reversing elements of an array is a special case of this scenario. In general, I think you would need to consider decomposition of your permutation P into cycles and then use it to move around the elements of your original array O[].
If that's what you are looking for, I can elaborate more.
EDIT: Others already presented excellent solutions while I was sleeping, so no need to repeat it here. ^_^
EDIT: My O(1) additional space is indeed not entirely correct. I was thinking only about "data" elements, but in fact you also need to store one bit per permutation element, so if we are precise, we need O(n) extra bits for that. But most of the time using a sign bit (as suggested by J.F. Sebastian) is fine, so in practice we may not need anything more than we already have.
If you didn't mind allocating memory for an extra hash of indexes, you could keep a mapping of original location to current location to get a time complexity of near O(n). Here's an example in Ruby, since it's readable and pseudocode-ish. (This could be shorter or more idiomatically Ruby-ish, but I've written it out for clarity.)
#!/usr/bin/ruby
objects = ['d', 'e', 'a', 'c', 'b']
order = [2, 4, 3, 0, 1]
cur_locations = {}
order.each_with_index do |orig_location, ordinality|
# Find the current location of the item.
cur_location = orig_location
while not cur_locations[cur_location].nil? do
cur_location = cur_locations[cur_location]
end
# Swap the items and keep track of whatever we swapped forward.
objects[ordinality], objects[cur_location] = objects[cur_location], objects[ordinality]
cur_locations[ordinality] = orig_location
end
puts objects.join(' ')
That obviously does involve some extra memory for the hash, but since it's just for indexes and not your "fairly large" objects, hopefully that's acceptable. Hash lookups are O(1), and even though there is a slight bump in the work when an item has been swapped forward more than once and you have to follow cur_location several times, the algorithm as a whole should be reasonably close to O(n).
If you wanted you could build a full hash of original to current positions ahead of time, or keep a reverse hash of current to original, and modify the algorithm a bit to get it down to strictly O(n). It'd be a little more complicated and take a little more space, so this is the version I wrote out, but the modifications shouldn't be difficult.
EDIT: Actually, I'm fairly certain the time complexity is just O(n), since each ordinality can have at most one hop associated, and thus the maximum number of lookups is limited to n.
#!/usr/bin/env python
def rearrange(objects, permutation):
    """Rearrange `objects` inplace according to `permutation`.

    ``result = [objects[p] for p in permutation]``
    """
    seen = [False] * len(permutation)
    for i, already_seen in enumerate(seen):
        if not already_seen:  # start permutation cycle
            first_obj, j = objects[i], i
            while True:
                seen[j] = True
                p = permutation[j]
                if p == i:  # end permutation cycle
                    objects[j] = first_obj  # [old] p -> j
                    break
                objects[j], j = objects[p], p  # p -> j
The algorithm (as I've noticed after I wrote it) is the same as the one from #meriton's answer in Java.
Here's a test function for the code:
def test():
    import itertools
    N = 9
    for perm in itertools.permutations(range(N)):
        L = list(range(N))
        LL = L[:]
        rearrange(L, perm)
        assert L == [LL[i] for i in perm] == list(perm), (L, list(perm), LL)
    # test whether assertions are enabled
    try:
        assert 0
    except AssertionError:
        pass
    else:
        raise RuntimeError("assertions must be enabled for the test")

if __name__ == "__main__":
    test()
There's a histogram sort, though its running time is given as a bit higher than O(N): O(N log log N).
I can do it given O(N) scratch space -- copy to a new array and copy back.
EDIT: I am aware of the existence of an algorithm that will do it in place. The idea is to perform the swaps on the array of integers 1..N while at the same time mirroring the swaps on your array of large objects. I just cannot find the algorithm right now.
The problem is one of applying a permutation in place with minimal O(1) extra storage: "in-situ permutation".
It is solvable, but an algorithm is not obvious beforehand.
It is described briefly as an exercise in Knuth, and for work I had to decipher it and figure out how it worked. Look at 5.2 #13.
For some more modern work on this problem, with pseudocode:
http://www.fernuni-hagen.de/imperia/md/content/fakultaetfuermathematikundinformatik/forschung/berichte/bericht_273.pdf
I ended up writing a different algorithm for this, which first generates a list of swaps to apply an order and then runs through the swaps to apply it. The advantage is that if you're applying the ordering to multiple lists, you can reuse the swap list, since the swap algorithm is extremely simple.
#include <string>
#include <utility>
#include <vector>
using namespace std;

void make_swaps(vector<int> order, vector<pair<int,int>> &swaps)
{
// order[0] is the index in the old list of the new list's first value.
// Invert the mapping: inverse[0] is the index in the new list of the
// old list's first value.
vector<int> inverse(order.size());
for(int i = 0; i < order.size(); ++i)
inverse[order[i]] = i;
swaps.resize(0);
for(int idx1 = 0; idx1 < order.size(); ++idx1)
{
// Swap list[idx] with list[order[idx]], and record this swap.
int idx2 = order[idx1];
if(idx1 == idx2)
continue;
swaps.push_back(make_pair(idx1, idx2));
// list[idx1] is now in the correct place, but whoever wanted the value we moved out
// of idx2 now needs to look in its new position.
int idx1_dep = inverse[idx1];
order[idx1_dep] = idx2;
inverse[idx2] = idx1_dep;
}
}
template<typename T>
void run_swaps(T &data, const vector<pair<int,int>> &swaps)
{
for(const auto &s: swaps)
{
int src = s.first;
int dst = s.second;
swap(data[src], data[dst]);
}
}
void test()
{
vector<int> order = { 2, 3, 1, 4, 0 };
vector<pair<int,int>> swaps;
make_swaps(order, swaps);
vector<string> data = { "a", "b", "c", "d", "e" };
run_swaps(data, swaps);
}

Resources