How can we write an algorithm to add multiple elements, say 5 elements {1,2,3,4,5}, to a queue?
I searched a lot but only found an algorithm to insert a single item; I don't know how to run a loop to insert multiple elements.
The algorithm to insert one item which I found is:
0 Start
1 Check if the queue is full: if (rear == N-1) then print "Queue is Full" and exit, else go to step 2
2 Increment the rear: ++rear;
3 Add the item at the 'rear' position: Q[rear] = item;
4 Exit
0 Start
1 Initialize an index variable i to 0
2 Check if the number of elements inserted so far (i) is equal to M (where M is the number of elements to insert). If it is, go to step 7.
3 Check if the queue is full: if (rear == N-1) then print "Queue is Full" and exit
4 Increment the rear: ++rear;
5 Add the item at the 'rear' position: Q[rear] = items[i];
6 Increment i and go to step 2
7 Exit
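A quick Python sketch of steps 0-7 (the list-plus-rear-index representation is my assumption; Q, N, rear and items match the pseudocode above):

N = 8                       # assumed capacity
Q = [None] * N              # the queue's backing array
rear = -1                   # rear == -1 means the queue is empty

def insert_all(items):
    # Steps 0-7: insert M items one by one, stopping if the queue fills up.
    global rear
    for i in range(len(items)):   # steps 1, 2 and 6 as a for loop
        if rear == N - 1:         # step 3: queue is full
            print("Queue is Full")
            return
        rear += 1                 # step 4: increment the rear
        Q[rear] = items[i]        # step 5: add the item at the rear

insert_all([1, 2, 3, 4, 5])       # Q now starts with 1, 2, 3, 4, 5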
Alternatively, you could check whether the queue has space for all M elements before the loop, as in the sketch below. Steps 1 to 6 can be implemented with a for loop (of course, any other loop would do the trick).
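The upfront variant just moves the capacity test before the loop (same assumptions as the sketch above):

def insert_all_checked(items):
    # Refuse the whole batch unless there is room for all M elements.
    global rear
    if rear + len(items) > N - 1:   # fewer than M free slots left
        print("Queue is Full")
        return
    for item in items:
        rear += 1
        Q[rear] = item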
I have a huge CSV file with 5,000 columns and 5,000,000 rows. I know that some columns in this file are exactly identical, and I want to identify them. Please note that I cannot load this huge file into memory, and runtime is also important.
Exactly the same?
I suppose you can verify it with hash functions.
Step 1: load the 5,000 values of the first row and compute 5,000 hash values, one per column; exclude the columns whose hash matches no other column's hash.
Step 2: load the next row's value for each surviving column and compute the hash of the concatenation of the preceding hash with the loaded value; again exclude the columns whose hash has no match.
Following steps: exactly as step 2: load, concatenate/hash, excluding columns without matches.
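A rough Python sketch of that idea: stream the file once, keep one running hash per surviving column, and prune columns whose hash matches no other column after each row (the grouping helper and all names are my assumptions):

import csv
import hashlib
from collections import defaultdict

def _prune(hashes):
    # Keep only columns whose current hash matches at least one other column.
    groups = defaultdict(list)
    for col, h in hashes.items():
        groups[h].append(col)
    return {col: h for col, h in hashes.items() if len(groups[h]) > 1}

def candidate_duplicate_columns(path):
    # Stream the CSV once, folding each row into a per-column running hash.
    with open(path, newline="") as f:
        reader = csv.reader(f)
        first = next(reader)
        # Step 1: hash the first row's value in every column, then prune.
        hashes = _prune({i: hashlib.md5(v.encode()).hexdigest()
                         for i, v in enumerate(first)})
        # Step 2 onwards: hash the previous hash concatenated with the value.
        for row in reader:
            hashes = _prune({i: hashlib.md5((h + row[i]).encode()).hexdigest()
                             for i, h in hashes.items()})
            if not hashes:
                return []
    groups = defaultdict(list)
    for col, h in hashes.items():
        groups[h].append(col)
    return [cols for cols in groups.values() if len(cols) > 1]

Since distinct columns can in principle collide on the same hash, a final pass comparing the surviving columns value by value would confirm true duplicates.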
Say we have a list of strings, and we can't load the entire list into memory, but we can load parts of it from a file. What would be the best way to remove the duplicate strings from the list?
One approach would be to use an external sort to sort the file, and then remove the duplicates with a single pass over the sorted list. This approach requires very little extra space and O(n log n) accesses to the disk.
Another approach is based on hashing: compute a hash code for each string, and load a sublist containing all strings whose hash code falls in a specific range. It is guaranteed that if x is loaded and it has a duplicate, the duplicate will land in the same bucket.
This requires O(n * #buckets) accesses to the disk, but might require more memory. You can invoke the procedure recursively (with different hash functions) if needed; a single-level version is sketched below.
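Something like this (untested) Python sketch for the single-level version with a fixed bucket count; a duplicate of x always hashes to x's bucket, so each bucket file can be deduplicated independently in memory (the file names and bucket count are my choices):

import os

NUM_BUCKETS = 64   # assumed: pick it so each bucket fits in memory

def dedupe_external(in_path, out_path):
    # Pass 1: scatter every string into a bucket file chosen by its hash.
    buckets = [open("bucket_%d.tmp" % b, "w") for b in range(NUM_BUCKETS)]
    with open(in_path) as f:
        for line in f:
            buckets[hash(line) % NUM_BUCKETS].write(line)
    for b in buckets:
        b.close()
    # Pass 2: duplicates always share a bucket, so an in-memory set
    # per bucket is enough to drop them.
    with open(out_path, "w") as out:
        for b in range(NUM_BUCKETS):
            seen = set()
            with open("bucket_%d.tmp" % b) as f:
                for line in f:
                    if line not in seen:
                        seen.add(line)
                        out.write(line)
            os.remove("bucket_%d.tmp" % b)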
My solution would be to do a merge sort, which allows for external memory usage. After sorting, searching for duplicates is as easy as only ever comparing two adjacent elements.
Example:
0: cat
1: dog
2: bird
3: cat
4: elephant
5: cat
After the merge sort:
0: bird
1: cat
2: cat
3: cat
4: dog
5: elephant
Then simply compare entries 0 & 1 -> no duplicates, so move forward.
1 & 2 -> duplicate, so remove 1 (this could be as simple as filling it with an empty string to skip over later).
2 & 3 -> duplicate, so remove 2.
etc.
The reason for removing the earlier entry (1, then 2) rather than the later one is that it allows a more efficient comparison: you don't have to worry about skipping indices that have already been removed.
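In Python, that single pass might look like this small sketch, using the example words above:

def dedupe_sorted(items):
    # Remove duplicates from an already-sorted list in a single pass.
    result = []
    for item in items:
        # Only the last element kept needs to be compared against.
        if not result or item != result[-1]:
            result.append(item)
    return result

words = ["cat", "dog", "bird", "cat", "elephant", "cat"]
print(dedupe_sorted(sorted(words)))   # ['bird', 'cat', 'dog', 'elephant']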
Consider a square matrix with all slots filled with zeroes. This will be the battlefield. To place ships, I mark their cells with 1s. A ship can be of size 1, 2 or 3, meaning that one, two or three contiguous blocks are set to 1, placed either horizontally or vertically. Now, what is the best strategy for an enemy to search for my ships, given that he has no idea how I have placed them? What could be a good strategy to search the matrix, or, how do I make the CPU a better player when it comes to making 'smart moves'?
Search randomly.
Search, and when you find a hit, attack the neighbouring blocks to check whether it is a size 2/3 ship.
Also, the CPU's initial ship placement could be based on previously winning positions rather than just random numbers.
Any other ideas?
The idea can be extended to form a larger board game of a 20 x 20 matrix with multiple ships. An example is given below.
0 0 0 0 0 0
0 1 0 0 0 0
0 0 0 1 1 0
0 1 0 0 0 0
0 1 0 0 0 0
0 1 0 0 0 0
Any help would be much appreciated !!
As you can have a ship of size 1, you basically need to enumerate through all of the fields, checking the neighbours for bigger ships. You can save some work by using a specific order, like going through the rows one after another:
1 2 3 4 5 6
7 8 9 10 11 12
13 ...
To detect bigger ships, you check the next two fields to the right (with a bounds check: you check the first field, and only if it is part of a ship do you check the second), as well as the two fields below (again with a bounds check). With that traversal order you ensure that you never need to check the fields to the left of or above the current one for bigger ships. When you do find a bigger ship, remember how many positions to the right you have visited and skip those as you move along.
It's just a suggestion, but a relatively efficient one, as sketched below. With some extra memory you could also avoid re-visiting fields after checking downward, but this won't lead to a performance win in real life.
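A rough Python sketch of that sweep (the board is a list of 0/1 rows; the names are made up):

def find_ships(board):
    # Scan row-major; on a hit, extend only right and down (ships of size 1-3).
    n = len(board)
    found = []
    skip = set()                      # fields already claimed by a ship
    for r in range(n):
        for c in range(n):
            if (r, c) in skip or board[r][c] != 1:
                continue
            ship = [(r, c)]
            # Probe up to two fields to the right, stopping at the first miss.
            for dc in (1, 2):
                if c + dc < n and board[r][c + dc] == 1 and len(ship) == dc:
                    ship.append((r, c + dc))
            # If nothing to the right, probe up to two fields below.
            if len(ship) == 1:
                for dr in (1, 2):
                    if r + dr < n and board[r + dr][c] == 1 and len(ship) == dr:
                        ship.append((r + dr, c))
            skip.update(ship)         # never re-check claimed fields
            found.append(ship)
    # For the 6 x 6 example above this returns the single cell,
    # the horizontal pair and the vertical ship of three.
    return found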
I've lost all my hair over this. I've got a 3-dimensional array.
Initialized as Array.new(rows) { Array.new(columns) { Array.new(CHANNELS, 0) } }
Everything seems to work, but when I try to add columns of padding, I can't figure out how the 2nd dimension gets whacked.
I've done this about 5 different ways and keep coming up with the wrong size for the second dimension. The first part works okay: I initialize an array stack_edge to be an array of 1 x n pixels and push/unshift it onto the beginning and end of @image_data, which then becomes an array of (original_height + 2*pads) rows.
But then I try to push & unshift pixels onto the columns of each row and get an array that thinks it is wider than it is. It reports a width 110 pixels greater than the original, and I can't figure out where the other 100 pixels come from. They're not there; I never noticed before because I calculate the width instead of interrogating the array for it. (old_width + 2*pad_s) worked, and all the data appears to be in place, but width = @image_data[row].size comes out with the 110-pixel-larger size. I'm guessing it's because the pixel I'm pushing on is a 10 x 1 array, and I put 5 on the front and 5 on the back, so 110 by some strange math. Can you tell me what I'm doing wrong?
(0...pad_s).each {
  @image_data.unshift(stack_edge)
  @image_data.push(stack_edge)
}
self.rows = @image_data.size
edge = Array.new(@image_data[0][0].size)
a = 'whats up'
(0...@image_data.size).each { |i|
  (0...pad_s).each {
    @image_data[i].unshift(edge)
    @image_data[i].push(edge)
  }
}
You are pushing/unshifting the same stack_edge array 5 times onto the front and 5 times onto the back of @image_data. So when you run over @image_data and push/unshift 10 "edges" onto each row in it, those 10 edges are added to stack_edge 10 separate times, once for each of the 10 positions where the same stack_edge object appears in @image_data. That's where the extra 100 pixels come from. Get it?
What you need is:
@image_data.unshift(stack_edge.dup)
@image_data.push(stack_edge.dup)
This is what is called an "aliasing bug". They tend to be a problem in OO languages.
Here's my question:
I want to create a question system that picks out random questions. I have two parameters: how many questions to ask, and how many unique questions there are.
For example, I have 6 unique questions (1,2,3,4,5,6),
and I have to ask questions 10 times (1,2,3,4,5,6,1,2,3,4).
The logic I need is:
It should be random.
Every question should be picked at least once.
No question should repeat before all of the others have been asked. Example: (2,6,6,3,4,1) <- question type 6 is repeated in places 2 and 3.
My logic is poor....
Can anyone write me a method that returns an array like (3,6,5,1,2,4,6,2,1,3)?
Thanks for your help!
Make an array called 'chosen' which is the same size as the question array, and set each of its values to 0. Each time you randomly choose question n, accept it only if chosen[n] is 0, and then set chosen[n] to 1. When all values of the chosen array equal 1, reset them all to 0.
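In Python, that 'chosen' array logic might look like this sketch (the function name and 1-based question numbers are my choices):

import random

def question_order(num_unique, num_to_ask):
    # E.g. question_order(6, 10) -> something like [3,6,5,1,2,4,6,2,1,3]:
    # random order, with no repeats until every question has been asked once.
    chosen = [0] * num_unique          # 0 = not yet asked in this cycle
    order = []
    while len(order) < num_to_ask:
        n = random.randrange(num_unique)
        if chosen[n] == 0:             # accept only questions not yet chosen
            chosen[n] = 1
            order.append(n + 1)        # report questions as 1-based
        if all(chosen):                # cycle complete: reset and start over
            chosen = [0] * num_unique
    return order

print(question_order(6, 10))

Rejection sampling like this is fine for a handful of questions; for a large pool, shuffling the question list once per cycle avoids the wasted draws.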