I am trying to figure out how to efficiently find all the numbers that can be produced as a linear combination of some set, say the first few abundant numbers (12, 18, 20, 24). My current approach is to run through all the sums a*12 + b*18 + c*20 + d*24 where the total doesn't exceed 100. I thought of two ways: a multiply-nested while loop, or a single while loop that increments different variables based on a conditional at the top. My problem with both methods is that I don't know how many conditionals or nested loops I'll need until runtime, since I don't know how many different numbers I'm going to be summing yet. Is there any way to write my program so that it's nested n times for n numbers, or so that there are n conditions for n numbers?
Here is the outline of the switch block:
int count = 1;
while (true) {
    if (num2 < smal) {
        switch (count) {
            case 1: a++; break;
            case 2: b++; break;
            case 3: c++; break;
            case 4: d++; break;
        }
    } else {
        switch (count) {
            case 1:
                if (a != 0) {
                    a = 0;
                    b++;
                } else { count++; }
                break;
            case 2:
                if (b != 0) {
                    b = 0;
                    c++;
                    count--;
                } else { count++; }
                break;
            case 3:
                if (c != 0) {
                    c = 0;
                    d++;
                    count--;
                } else { count++; }
                break;
            case 4:
                break;
        }
    }
    // num is updated here
    if (count == 4) { break; }
}
I don't know what language you're using, but if you don't know how many levels deep you might need to go, then you should consider recursion (as long as it's not hundreds or thousands of levels; you'll blow the stack).
In case you're not familiar with recursion, it's where you have a function call itself. Here's a silly example.
Result doStuff(Stuff stuff, int level) {
    SubResult sr = null;
    if (level > 0) {
        sr = doStuff(stuff, level - 1);
    }
    Result r = interesting_calculation(stuff, sr);
    return r;
}
Recursion can be tricky, but sometimes it's the right tool for the job.
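Applied to your problem, each recursion level plays the role of one nesting level, so you get "n nested loops" for an n-element set without knowing n at compile time. A minimal Java sketch (class and method names are placeholders I made up; note the all-zero combination contributes a sum of 0):

```java
import java.util.TreeSet;

public class LinearCombos {
    // Collects every total <= limit that is a non-negative integer
    // combination of the given terms. One recursion level per term
    // replaces one nesting level of the loop-based version.
    static void collect(int[] terms, int index, int sum, int limit, TreeSet<Integer> out) {
        if (index == terms.length) {
            out.add(sum);
            return;
        }
        // try 0, 1, 2, ... copies of terms[index], as long as we stay <= limit
        for (int s = sum; s <= limit; s += terms[index]) {
            collect(terms, index + 1, s, limit, out);
        }
    }

    public static void main(String[] args) {
        TreeSet<Integer> sums = new TreeSet<>();
        collect(new int[]{12, 18, 20, 24}, 0, 0, 100, sums);
        System.out.println(sums);
    }
}
```

The recursion depth equals the number of terms, so for small sets there is no risk of blowing the stack.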
There are similar questions, but my main concern here is processing time.
I have two point clouds, both of type pcl::PointXYZL, that is, each point carries a label/id. I want to remove from cloud B all the points that exist in cloud A (recognized by the label info).
A plain nested iteration takes too much time.
I decided to save the labels from cloud A into boost::container::flat_set<int> labels, and to store cloud B as std::map<int, pcl::PointXYZL> cloud_B, where the key is the point's label/id. Then I do:
for (boost::container::flat_set<int>::iterator it = labels.begin(); it != labels.end(); ++it) {
    if (auto point{ cloud_B.find(*it) }; point != std::end(cloud_B)) {
        cloud_B.erase(*it);
    }
}
It is now much, much faster, but honestly, I think there may be a more efficient solution.
I also tried:
for (boost::container::flat_set<int>::iterator it = labels.begin(); it != labels.end(); ++it) {
    try {
        cloud_B.erase(*it);
        throw 505;
    }
    catch (...) {
        continue;
    }
}
But it takes more time than the first example I showed.
I appreciate any help with this!
Both of your solutions go through std::map, where each lookup or erase costs O(log n), so the whole pass is roughly O(n log n).
Use std::unordered_set<std::uint32_t> to check whether a label exists or not. The average case of unordered_set::count is constant, so it should beat the tree-based lookups.
Sample code (untested), but you get the rough idea:
pcl::PointCloud<pcl::PointXYZL> cloud_a;
pcl::PointCloud<pcl::PointXYZL> cloud_b;
... (fill clouds)

// generate unordered_set<uint32_t>  O(n)
std::unordered_set<std::uint32_t> labels_in_cloud_a;
for (std::size_t i = 0; i < cloud_a.size(); ++i) {
    labels_in_cloud_a.insert(cloud_a.points[i].label);
}

// indices in cloud_b whose label doesn't exist in labels_in_cloud_a  O(n*1) = O(n)
std::vector<int> indices;
for (std::size_t i = 0; i < cloud_b.size(); ++i) {
    if (labels_in_cloud_a.count(cloud_b.points[i].label) == 0) {
        indices.push_back(static_cast<int>(i));
    }
}

// only the points whose labels don't exist in cloud_a will remain  O(n)
pcl::copyPointCloud(cloud_b, indices, cloud_b);
This is my solution so far...
First, I store both clouds as std::map<int, pcl::PointXYZL>. Remember that I want to remove from cloud_B all points that are in cloud_A. Then:
std::map<int, pcl::PointXYZL>::iterator it = cloud_B.begin();
for (; it != cloud_B.end(); ) {
    if (auto point{ cloud_A.find(it->first) }; point != std::end(cloud_A)) { // if the label was found
        it = cloud_B.erase(it);
    }
    else {
        ++it;
    }
}
So let's consider the case where there is the below snippet:
if (x == 0)
{
    for (var i = 0; i < 5; ++i)
    {
        // do something
    }
}
else
{
    for (var i = 0; i < 5; ++i)
    {
        // do something different
    }
}
As you can see, both of the conditions iterate through the same for loop but perform different actions based on the condition. My question is, is it a bad practice to have something like this:
for (var i = 0; i < 5; ++i)
{
    if (x == 0) {
        // do something
    } else {
        // do something else
    }
}
The reason I think this may be a bad practice is that for every iteration of the loop, a conditional check is executed, whereas in the first snippet the condition is checked once and then the loop is executed. Am I mistaken?
Unless you're writing code that must run at high speed and be super efficient, go for code readability over efficiency. In this case, I'd say the 2nd example is clearer as it's less code, and a commonly followed pattern.
I suppose another factor to consider though is that the 2nd example seems to imply that x might change value, whereas the 1st example doesn't. This would be worth putting a comment nearby explaining the choice.
My gut instinct would agree with you, that the 1st example would be more efficient, but in reality the compiler's optimisations are likely to make short work of examples like the above - they'll both likely be equal in performance.
Here's a pretty impressive list of optimisations that can be made on loops, to give you an idea, and also branch optimisation (see the answer), may have an effect if the loops run for many iterations.
If you use
if (x == 0)
{
    for (var i = 0; i < 5; ++i)
    {
        // do something
    }
}
else
{
    for (var i = 0; i < 5; ++i)
    {
        // do something different
    }
}
then you have done one comparison, plus a loop performing the tasks in its scope 5 times.
When you use
for (var i = 0; i < 5; ++i)
{
    if (x == 0) {
        // do something
    } else {
        // do something else
    }
}
then a loop performs the tasks in its scope 5 times, and in addition the comparison is done 5 times.
On first sight the former results in the fewest instructions. But the compiler/interpreter may perform an optimization step, hoisting the comparison out of the loop so it runs only once (a transformation known as loop unswitching). Whether that happens depends on the compiler/interpreter. If you have a good understanding of how the compiler/interpreter for your language works, you can use that knowledge to write readable code that still produces well-optimized output.
Yet another alternative is to work with functions. This approach is only useful if the variable x is constant throughout the loop, in other words: you don't modify it during the loop itself. That is the case in your first example. In your second example, however, x can change inside the loop, which leads to running either the if {} or the else {} block on different iterations, i.e. executing two different functions.
Alternative: select a function first and use it in the loop. This may be more useful if you have lots of different tasks to perform. Just select a function beforehand. Some programming languages allow this, others don't, so it depends on the language itself.
// check which function to run
variable runThisFunction;
if (x == 0) runThisFunction = { /* do something */ }
else runThisFunction = { /* do something else */ }

// loop 5 times using the function
for (var i = 0; i < 5; ++i)
{
    call runThisFunction with arg i provided.
}
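The pseudocode above can be made concrete in Java with a functional interface. This is only a sketch of the select-a-function-first idea; the class name, the log-to-a-list bodies, and the strings are placeholders standing in for "do something" and "do something else":

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

public class SelectFirst {
    static List<String> run(int x) {
        List<String> log = new ArrayList<>();
        // check which function to run (the comparison happens exactly once)
        IntConsumer action = (x == 0)
                ? i -> log.add("something " + i)
                : i -> log.add("something else " + i);
        // loop 5 times using the chosen function; no per-iteration branch on x
        for (int i = 0; i < 5; i++) {
            action.accept(i);
        }
        return log;
    }
}
```

The branch on x is paid once, before the loop, and the loop body is a single indirect call.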
I'm learning data structures from a book. The book has snippets of pseudocode at the end of each chapter, and I'm trying to determine their time complexity. I'm having a little difficulty understanding some concepts in time complexity.
I have two pieces of code that do the same thing, seeing if an element in an array occurs at least 3 times; however, one uses recursion and the other uses loops. I have answers for both; can someone tell me whether or not they're correct?
First way (without recursion):
boolean findTripleA(int[] anArray) {
    if (anArray.length <= 2) {
        return false;
    }
    for (int i = 0; i < anArray.length; i++) {
        // check if anArray[i] occurs at least three times
        // by counting how often it occurs in anArray
        int count = 0;
        for (int j = 0; j < anArray.length; j++) {
            if (anArray[i] == anArray[j]) {
                count++;
            }
        }
        if (count >= 3) {
            return true;
        }
    }
    return false;
}
I thought the first way had a time complexity of O(n^2) in both the best and worst cases, because there is no way to avoid the inner for loop.
Second way (with recursion):
public static Integer findTripleB(int[] anArray) {
    if (anArray.length <= 2) {
        return null;
    }
    // use insertion sort to sort anArray
    for (int i = 1; i < anArray.length; i++) {
        // insert anArray[i]
        int j = i - 1;
        int element = anArray[i];
        while (j >= 0 && anArray[j] > element) {
            anArray[j + 1] = anArray[j];
            j--;
        }
        anArray[j + 1] = element;
    }
    // check whether anArray contains three consecutive
    // elements of the same value
    for (int i = 0; i < anArray.length - 2; i++) {
        if (anArray[i] == anArray[i + 2]) {
            return Integer.valueOf(anArray[i]);
        }
    }
    return null;
}
I thought the second way had a worst-case time complexity of O(n^2) and a best case of O(n) if the array is already sorted, so insertion sort does no shifting; however, I don't know how recursion comes into play.
The best case for the first one is O(n) - consider what happens when the element appearing first appears 3 times (it will return after just one iteration of the outer loop).
The second one doesn't use recursion, and your running times are correct (technically insertion sort won't 'be skipped', but the running time of insertion sort on an already sorted array is O(n) - just making sure we're on the same page).
Two other ways to solve the problem:
Sort using a better sorting algorithm such as mergesort.
Would take O(n log n) best and worst case.
Insert the elements into a hash map of element to count.
Would take expected O(n) worst case, O(1) best case.
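The hash-map alternative can be sketched as follows (a hypothetical helper, not the book's code): count occurrences as you scan, and return as soon as any element reaches three, which is what gives the O(1) best case.

```java
import java.util.HashMap;
import java.util.Map;

public class TripleFinder {
    // Returns true if some element occurs at least three times.
    // Expected O(n) worst case; O(1) best case (e.g. the first
    // three elements are equal, so we stop after three steps).
    static boolean hasTriple(int[] anArray) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int value : anArray) {
            int c = counts.merge(value, 1, Integer::sum); // increment count
            if (c >= 3) {
                return true; // early exit
            }
        }
        return false;
    }
}
```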
Hi, I have just sat my final-year programming exam, where I was asked the question:
What sorting and searching algorithms are used to solve the 8-queens problem?
Correct me if I am wrong, but there is no sorting at all...
I understand that a basic level of searching is needed when placing a queen and during backtracking, but where does sorting come into this, if at all?
Below is what I have been looking at, and I just can't see it.
public class Queens
{
    int[] positions;
    int counter = 0;
    boolean isFinished = false;

    public Queens()
    {
        positions = new int[8];
        placeQueens(0);
    }

    public boolean canPlaceQueen(int row, int column)
    {
        for (int i = 0; i < row; i++)
        {
            if (positions[i] == column || (i - row) == (positions[i] - column) || (i - row) == (column - positions[i]))
            {
                return false;
            }
        }
        return true;
    }

    public void placeQueens(int row)
    {
        counter++;
        printQueens();
        for (int column = 0; column < 8; column++)
        {
            if (canPlaceQueen(row, column))
            {
                positions[row] = column;
                if (row == 8 - 1)
                {
                    System.out.println("FINAL " + counter);
                    isFinished = true;
                    printQueens();
                }
                else if (!isFinished)
                {
                    placeQueens(row + 1);
                }
            }
        }
    }

    public void printQueens()
    {
        for (int i = 0; i < 8; i++)
        {
            for (int j = 0; j < 8; j++)
            {
                if (positions[i] == j)
                {
                    System.out.print("Q ");
                }
                else
                {
                    System.out.print("* ");
                }
            }
            System.out.println();
        }
        System.out.println();
    }
}
I think in this case you're misinterpreting what "sorting" means. For backtracking to work, you need to analyze positions in some predictable ordering. If your algorithm does not analyze positions in a set order, then when you "prune" a set of positions, you may have pruned a valid position. Without this ordering, or tree-like structure of positions, backtracking does not work. You do not, however, need to pre-sort a set of positions or anything like that; that would, in fact, defeat the purpose of backtracking.
The idea is that some combinations of positions never even have to be built. Once a conflict is found, ALL combinations extending that partial combination are no longer considered. It is the order in which these combinations are built that matters, not sorting them ahead of time. All combinations must be built and considered in the proper order. This lets us know, when we give up on a "branch", that every combination built on that branch would have been equally (or even more) incorrect than the option we just rejected; otherwise you can "over-prune" your result set and miss a proper combination. But no O(n log n) sorting algorithm is required, at least not in the n-queens problem. In fact, if you pre-built all positions and sorted them, you would be completely ignoring the pruning that speeds up the computation of this problem considerably.
http://en.wikipedia.org/wiki/Backtracking
Below is the problem description and algorithm that I have written. Is there anything to be done to improve this algorithm?
Given an integer array of unknown size, containing only numbers between 0 and 30, write a function to return an integer array containing all of the duplicates.
int[] findDupes(int[] array) {
    int[] found = new int[30];
    int[] dupes = new int[30];
    int dupesCount = 0;
    for (int i = 0; i < array.length; i++) {
        if (found[array[i]] <= 1) {
            found[array[i]]++;
        } else {
            continue;
        }
        if (found[array[i]] > 1) {
            dupes[dupesCount++] = array[i];
            if (dupesCount == 30)
                break;
        }
    }
    if (dupesCount == 0)
        return new int[0];
    return dupes;
}
I am assuming that the best case for this algorithm is n or 30, whichever is lower, and the worst case is n, since I have to scan the entire array to find the duplicates. Any comments?
You've got the right idea, but ask yourself: what does this block do, exactly?
if (found[array[i]] > 1) {
    dupes[dupesCount++] = array[i];
    if (dupesCount == 30)
        break;
}
When does it fire?
Walk through your code with a couple of samples including an array of 1000 occurrences of 0.
What exactly are you returning? Why do you need to special-case 0?
Also, the best-case running time is going to be greater than 30. What is the minimum input that makes it stop before reaching the end?
We need a more precise definition of the problem. Are there only 1 or 2 occurrences of each integer? Can there be 0 or 3 occurrences?
If there are only 1 or 2 occurrences of each integer, and the integers range from 1 to 30, I would have a BitSet and flip the corresponding bit each time I find an integer. When I am done reading the original array, all the bits that are 0 will represent the integers with duplicates.
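A sketch of that BitSet idea, under the stated assumption that each value occurs at most twice. One extra `seen` set (my addition, not part of the original suggestion) distinguishes "seen twice" from "never seen at all", since both end up with a 0 bit after flipping:

```java
import java.util.BitSet;

public class DupeBits {
    // Assumes each value in 0..30 occurs at most twice.
    // `toggled` flips once per occurrence, so a value seen twice
    // ends up at 0; `seen` rules out the never-seen values.
    static BitSet findDupes(int[] array) {
        BitSet toggled = new BitSet(31);
        BitSet seen = new BitSet(31);
        for (int value : array) {
            toggled.flip(value);
            seen.set(value);
        }
        // seen, but toggled back to 0  ->  exactly two occurrences
        BitSet dupes = (BitSet) seen.clone();
        dupes.andNot(toggled);
        return dupes;
    }
}
```

This is O(n) time with constant extra space (two 31-bit sets).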
Something strange:
if (found[array[i]] <= 1) {
    ...
} else {
    continue; // happens if found[array[i]] > 1
}
if (found[array[i]] > 1) { // usually don't get here, because of the continue
Is the continue a fix so that a number is only added once? Although it works, the code is misleading.
Do you have to return a 30-length array if there is only one duplicate?
I'd suggest making your code slightly slower but clearer by splitting the tasks.
Here is the modified version, with comments embedded.
int[] found = new int[31]; // values range from 0 to 30
int[] dupes = new int[31];
int dupesCount = 0;
for (int i = 0; i < array.length; i++) {
    found[array[i]]++;
    if (found[array[i]] == 2) { // duplicate found; record it only once
        dupes[dupesCount++] = array[i];
        // if all 31 distinct values have turned out to be duplicates,
        // don't scan the array any more
        if (dupesCount == 31)
            break;
    }
}
if (dupesCount == 0)
    return null;
return dupes;