TI-84 Plus CE gcd of every element in list L1 - greatest-common-divisor

I have a list in L1 and I need help finding the greatest common divisor of all its elements. The list has a variable length.
An example:
L1
18
24
36
should return 6

This code snippet should work as a program. Basically the calculator has a built-in gcd() function, but it only works on two numbers at a time.
So what we're doing here is cycling through the list and gcd'ing our previous gcd with the next number.
:L1(1)->D
:For(X,2,dim(L1))
:gcd(D,L1(X))->D
:End
:Disp D
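For anyone who wants to check results on a computer, the same fold can be sketched in Python (my own illustration, not part of the calculator program); `math.gcd` plays the role of the built-in gcd(:

```python
from functools import reduce
from math import gcd

def gcd_list(nums):
    # fold gcd across the list, exactly like the TI-BASIC loop
    return reduce(gcd, nums)

print(gcd_list([18, 24, 36]))  # 6
```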


What is the most efficient way to sort a stack using a limited set of instructions?

I know an almost identical question has already been asked here, but I do not find the provided answers to be very helpful, since the goal of the exercise was not clearly stated in the OP.
I have designed a simple algorithm to solve the exercise described below, but I would like help in order to improve it or
design a more efficient one.
Exercise
Given a stack A filled with n random integers (positive and/or negative) with no duplicates, an empty stack B and the eleven instructions listed below, print to the screen the shortest list made out of those instructions only such that when all the instructions are followed in order, A is sorted (the smallest number must be on top of the stack).
sa : swap a - swap the first 2 elements at the top of stack a.
sb : swap b - swap the first 2 elements at the top of stack b.
ss : sa and sb at the same time.
pa : push a - take the first element at the top of b and put it at the top of a.
pb : push b - take the first element at the top of a and put it at the top of b.
ra : rotate a - shift up all elements of stack a by 1. The first element becomes the last one.
rb : rotate b - shift up all elements of stack b by 1. The first element becomes the last one.
rr : ra and rb at the same time.
rra : reverse rotate a - shift down all elements of stack a by 1. The last element becomes the first one.
rrb : reverse rotate b - shift down all elements of stack b by 1. The last element becomes the first one.
rrr : rra and rrb at the same time.
The goal of the exercise is to find the shortest list of stack instructions such that, when followed, A is sorted. What matters most is the size of the list, not the complexity of the algorithm used to find it.
Algorithm
For now I have implemented this very simple algorithm:
Copy all the numbers into an array and sort it so that the smallest number is at index 0.
Take the first number in the sorted array; we'll call it x. We need to move x to the top of the stack and then push it to B, so:
If x is in second position, swap.
If x is closer to the top of the stack, rotate until x is on top.
If x is closer to the bottom of the stack, reverse rotate until x is on top.
After each operation check if the stack is sorted.
If it is not, push the first element of the stack onto B, take the next element in the array and repeat.
When only two elements are left in A, check if they are ordered, if not swap them.
Push all the elements from B back onto A.
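As a rough illustration of the steps above, here is a small Python simulation (my own sketch with hypothetical helper names, not the actual submission; index 0 is treated as the top of the stack, and no duplicates are assumed, per the exercise):

```python
def simple_push_swap(nums):
    # index 0 is the top of the stack; assumes no duplicates and n >= 3
    a, b, ops = list(nums), [], []

    def do(op):
        ops.append(op)
        if op == 'sa':    a[0], a[1] = a[1], a[0]
        elif op == 'pb':  b.insert(0, a.pop(0))
        elif op == 'pa':  a.insert(0, b.pop(0))
        elif op == 'ra':  a.append(a.pop(0))
        elif op == 'rra': a.insert(0, a.pop())

    for x in sorted(a)[:-2]:              # push all but the two largest to B
        i = a.index(x)
        if i == 1:
            do('sa')                      # x is in second position: swap
        elif i <= len(a) // 2:
            for _ in range(i):            # closer to the top: rotate
                do('ra')
        else:
            for _ in range(len(a) - i):   # closer to the bottom: reverse rotate
                do('rra')
        do('pb')
    if a[0] > a[1]:                       # order the two remaining elements
        do('sa')
    while b:                              # push everything back onto A
        do('pa')
    return ops, a

ops, final = simple_push_swap([5, 2, 4, 1, 3])
print(final)   # [1, 2, 3, 4, 5]
```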
This algorithm works pretty well when n is small, but the instruction count blows up when n gets large. On average I get:
30 instructions for n = 10.
2500 instructions for n = 100.
60000 instructions for n = 500.
250000 instructions for n = 10000.
I would like to go below 5000 steps for n = 500 and below 500 steps for n = 100.
This is a variation on https://stackoverflow.com/a/38165541/585411 which you already rejected. But hopefully you'll understand my explanation of how to do a bottom up mergesort better.
A run is a group of numbers in sorted order. At first you have many runs, presumably most are of small length. You're done when you have one run, in stack A.
To start, keep rotating A backwards while the bottom element is <= the top. This will position the start of a run at the top of A.
Next, we need to split the runs evenly between A and B. The way we do it is go through A once, looking for runs. The first run goes at the bottom of A, the second run goes at the bottom of B, and so on. (Placing at the bottom of A just needs ra until the run is done. Placing at the bottom of B means pb then rb.)
Once we've split the runs, we either just placed a run at the bottom of A and A has one more run than B, or we just placed a run at the bottom of B and they have the same number of runs.
Now start merging runs, while you continue switching between A and B. Every time you merge, if you merged to A then A wound up with one more run. If you merged to B you have the same number of runs.
Merging a run to B looks like:
if top of A < top of B:
    pb
rb
while bottom of B <= top of B:
    if top of A < top of B:
        pb
    rb
while bottom of B <= top of A:
    pb
    rb
Merging a run to A is similar, just reversing the roles of the stacks.
Continue until B is empty. At that point B has 0 runs, while A has one. Which means that A is sorted.
This algorithm will take O(n log(n)) comparisons.
The problem has changed a lot since I first answered, so here are ideas for optimizations.
First, when splitting, we can do better than just dealing runs to A and B. Specifically we can put rising runs at the bottom of A, and push falling runs onto B (which leaves them rising). With an occasional sa to make the runs longer. These operations can be interleaved, so, for instance, we can deal out 5 2 3 1 4 with pb ra ra pb ra and then merge them with pa ra ra ra pa ra thereby sorting it with 11 operations. (This is probably not optimal, but it gives you the idea.) If you're clever about this you can probably start with an average run length in both piles of around 4 (and maybe much better). And during the splitting process you can do a lookahead of several instructions to figure out how to efficiently wind up with longer runs. (If you have 500 elements in runs of 4 that's 125 runs. The merge sort pass now should be able to finish in 7 passes.)
Are we done finding potential optimizations? Of course not.
When we start the merge passes, we now have uneven numbers of runs, and uneven numbers of elements. We are going to merge pairs of runs, place them somewhere, merge pairs again, place them somewhere, etc. After the pass is done, we'd like two things to be true:
The average length of run in both stacks should be about the same (merging runs of similar lengths is more efficient).
We want to have used as few operations as possible. Since merging n into m takes 2n+m operations, it matters where we put the merge.
We can solve for both constraints by using dynamic programming. We do that by constructing a data structure with the following information:
by the number of merged runs created:
    by the number of runs put in `A`:
        by the number of elements put in `A`:
            minimal number of required steps
            last stack merged into
We can then look through the part with the largest number of runs created, and figure out what makes the average run size as close as possible. And then walk back to figure out which sequence of merges got there in the minimum number of steps. And then we can work out what sequence of steps we took, and where we wound up.
When you put all of this together, I'm dubious that you'll be able to sort 500 elements in only 5000 steps. But I'd be surprised if you can't get it below 6000 on average.
And once you have all that, you can start to look for better optimizations still. ("We don't care how much analysis is required to produce the moves" is an invitation to spend unlimited energy optimizing.)
The question needs to be edited. The exercise is called "push swap", a project for students at school 42 (non-accredited school). A second part of the project is called "checker" which verifies the results of "push swap".
Here is a link that describes the push swap challenge | project. Spoiler alert: it also includes the author's approach for 100 and 500 numbers, so you may want to stop reading after the 3 and 5 number examples.
https://medium.com/@jamierobertdawson/push-swap-the-least-amount-of-moves-with-two-stacks-d1e76a71789a
The term stack is incorrectly used to describe the containers a and b (possibly a French to English translation issue), as swap and rotate are not native stack operations. A common implementation uses a circular doubly-linked list.
push swap: input a set of integers to be placed in a, and generate a list of the 11 operations that results in a being sorted. Other variables and arrays can be used to generate the list of operations. Some sites mention no duplicates in the set of integers. If there are duplicates, I would assume a stable sort, where duplicates are to be kept in their original order. Otherwise, if going for an optimal solution, all permutations of duplicates would need to be tested.
checker: verify the results of push swap.
Both programs need to validate input as well as produce results.
One web site lists how push swap is scored.
required: sort 3 numbers with <= 3 operations
required: sort 5 numbers with <= 12 operations
scored: sort 100 numbers with <= 700 operations (max score), 900, 1100, or 1300 operations (intermediate scores), 1500 operations (min score)
scored: sort 500 numbers with <= 5500 operations (max score), 7000, 8500, or 10000 operations (intermediate scores), 11500 operations (min score)
The following gives an idea of what is allowed in an algorithm to generate a list of operations. The first step is to convert the values in a into ranks (the values are never used again). In the case of duplicates, use the order of the duplicates when converting to ranks (stable sort), so there are no duplicate ranks. The values of the ranks are where the ranks belong in a sorted array:
for (i = 0; i < n; i++)
    sorted[rank[i]] = rank[i];
For example, the values {-2 3 11 9 -5} would be converted to {1 2 4 3 0}: -2 belongs at sorted[1], 3 at sorted[2], ..., -5 at sorted[0]. For a stable sort where duplicates are allowed, the values {7 5 5 1 5} would be converted to {4 1 2 0 3}.
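As an illustration, the rank conversion can be sketched in Python as a stable argsort (the function name is mine):

```python
def to_ranks(values):
    # stable: equal values keep their original relative order
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

print(to_ranks([-2, 3, 11, 9, -5]))  # [1, 2, 4, 3, 0]
print(to_ranks([7, 5, 5, 1, 5]))     # [4, 1, 2, 0, 3]
```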
If a has 3 ranks, then there are 6 permutations of the ranks, and a maximum of 2 operations are needed to sort a:
{0 1 2} : already sorted
{0 2 1} : sa ra
{1 0 2} : sa
{1 2 0} : rra
{2 0 1} : ra
{2 1 0} : sa rra
For 5 ranks, 2 can be moved to b using 2 operations, and the 3 left in a sorted with a max of 2 operations, leaving at least 8 operations to insert the 2 ranks from b into a, to end up with a sorted a. There are only 20 possible ways to move 2 ranks from b into a, small enough to create a table of 20 optimized sets of operations.
For 100 and 500 numbers, there are various strategies.
Spoiler:
Youtube video that shows 510 operations for n=100 and 3750 operations for n=500.
https://www.youtube.com/watch?v=2aMrmWOgLvU
Description converted to English:
Initial stage :
- parse parameters
- Creation of a stack A which is a circular doubly linked list (last.next = first; first.prev = last).
- Addition in the struct of a rank component, integer from 1 to n.
This will be much more practical later.
Phase 1 :
- Split the list into 3 (modifiable parameter in the .h).
- Push the 2 smallest thirds into stack B and do a pre-sort; do ra with the others.
- Repeat the operation until there are only 3 numbers left in stack A.
- Sort these 3 numbers with a specific algo (2 operations maximum)
Phase2:
(Only the ra/rra/rb/rrb commands are used. sa and sb are not used in this phase)
- Scan B and look for the number that will take the fewest moves to be pushed into A.
There are each time 4 ways to bring a number from B to A: ra+rb, ra+rrb, rra+rb, rra+rrb. We are looking for the mini between these 4 ways.
- Then perform the operation.
- Repeat the operation until empty B.
Phase 3: If necessary, rotate stack A to finalize the correct order, using the shorter of ra or rra.
The optimization comes from maximizing the use of the double rotations rr and rrr.
Explanation:
Replace all values in a by rank.
For n = 100, a 3 way split is done:
ranks 0 to 32 are moved to the bottom of b,
ranks 33 to 65 are moved to the top of b,
leaving ranks 66 to 99 in a.
I'm not sure what is meant by "pre-sort" (top | bottom split in b?).
Ranks 66 to 99 in a are sorted, using b as needed.
Ranks from b are then inserted into a using fewest rotates.
For n = 500, a 7 way split is done:
Ranks 0 to 71 moved to bottom of b, 72 to 142 to top of b, which
will end up in the middle of b after other ranks moved to b.
Ranks 143 to 214 to bottom of b, 215 to 285 to top of b.
Ranks 286 to 357 to bottom of b, 358 to 428 to top of b.
Leaving ranks 429 to 499 in a.
The largest ranks in b are at the outer edges, smallest in the middle,
since the larger ranks are moved into sorted a before smaller ranks.
Ranks in a are sorted, then ranks from b moved into a using fewest rotates.

How to make an equation hold true with 1000 numbers and 3 types of operators

I have 1000 numbers and a target of 37. I need to insert plus, minus or times between all my numbers to make the equation hold true. I have to use all the numbers. The minimum value of my numbers is 0 and the maximum is 9. The instructions don't say anything about the use of parentheses.
Ex: [2,3,4,6,5,1,0,7,9,8...etc is randomly repeated until the list reaches 1000 numbers] = 37
Is there any obvious ways of tackling this with an algorithm? I don't need actual code, rather examples on ways of thinking in words, and then the names or links to code that are "thinking" in the suggested way.
My first thought was to start from the right and try to get to 37 with the numbers that appears before the first 0 appears, and then multiply the first 0 with the digit to the left of it:
1. My list with numbers:
[...lots of numbers, 0,3,1,9,5,5] = 37
2. Split the list to:
[...lots of numbers,0]
and
[3,1,9,5,5]
3.
- Multiply the last zero with the left number of the first list above to eliminate all those numbers and make it easier to calculate.
- Try different operators in different combinations between the numbers in the second list: [3,1,9,5,5] until I get 37
Ex:
5*5 = 25 (remember 25 if not same as the target, don't use combo again)
25+9 = 34 (remember 34 if not same as the target, don't use combo again)
34-1 = 33 (remember 33 if not same as the target, don't use combo again)
34+3 = 37
I suppose I might need to use a recursive function for it to remember how near you've come to the target for every calculation, but I'm not sure whether the above is even an established strategy.
1 - This might not be possible - if you have only one kind of non-null number and/or 0s there is no way you can reach 37 - it is a prime number.
2 - More generally, if the gcd of your non-null numbers is greater than 1 (e.g. 2, 4 and 6, whose gcd is 2) you can't solve your problem.
3 - On the contrary, if the gcd is 1, then you can find linear combinations of these numbers such that X1·a1 + X2·a2 + … + Xn·an = 1. If you have "enough" such numbers, multiply everything by 37 and you can reach your target.
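For small inputs, reachability can also be checked mechanically. Below is a hedged Python sketch (my own, not from the original answer) that tracks (finished sum, current product term) states so that standard precedence - times binding tighter than plus/minus - is respected; for 1000 numbers the state set would need pruning (e.g. bounding intermediate values) to stay tractable:

```python
def reachable(nums, target):
    # DP over (finished sum, current product term); '*' binds tighter than '+'/'-'
    states = {(0, nums[0])}
    for x in nums[1:]:
        nxt = set()
        for s, t in states:
            nxt.add((s + t, x))     # place '+'
            nxt.add((s + t, -x))    # place '-'
            nxt.add((s, t * x))     # place '*'
        states = nxt
    return any(s + t == target for s, t in states)

print(reachable([1, 2, 3], 7))   # True: 1 + 2*3
print(reachable([2, 2], 5))      # False: only 4, 0, 4 are reachable
```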

Finding second largest element in sliding window

So given an array and a window size, I need to find the second largest in every window. Brute force solution is pretty simple, but I want to find an efficient solution using dynamic programming
The brute force solution times out when I try it for big arrays, so I need to find a better solution. My solution was to find the second greatest in each sliding window by sorting them and getting the second element, I understand that some data structures can sort faster, but I would like to know if there are better ways.
There are many ways that you can solve this problem. Here are a couple of options. In what follows, I'm going to let n denote the number of elements in the input array and w be the window size.
Option 1: A simple, O(n log w)-time algorithm
One option would be to maintain a balanced binary search tree containing all the elements in the current window, including duplicates. Inserting something into this BST would take time O(log w) because there are only w total elements in the window, and removing an element would also take time O(log w) for the same reason. This means that sliding the window over by one position takes time O(log w).
To find the second-largest element in the window, you'd just need to apply a standard algorithm for finding the second-largest element in a BST, which takes time O(log w) in a BST with w elements.
The advantage of this approach is that in most programming languages, it'll be fairly simple to code this one up. It also leverages a bunch of well-known standard techniques. The disadvantage is that the runtime isn't optimal, and we can improve upon it.
Option 2: An O(n) prefix/suffix algorithm
Here's a linear-time solution that's relatively straightforward to implement. At a high level, the solution works by splitting the array into a series of blocks, each of which has size w. For example, consider the following array:
31 41 59 26 53 58 97 93 23 84 62 64 33 83 27 95 02 88 41 97
Imagine that w = 5. We'll split the array into blocks of size 5, as shown here:
31 41 59 26 53 | 58 97 93 23 84 | 62 64 33 83 27 | 95 02 88 41 97
Now, imagine placing a window of length 5 somewhere in this array, as shown here:
31 41 59 26 53 | 58 97 93 23 84 | 62 64 33 83 27 | 95 02 88 41 97
|-----------------|
Notice that this window will always consist of a suffix of one block followed by a prefix of another. This is nice, because it allows us to solve a slightly simpler problem. Imagine that, somehow, we can efficiently determine the two largest values in any prefix or suffix of any block. Then we could find the second-max value in any window as follows:
Figure out which blocks' prefix and suffix the window corresponds to.
Get the top two elements from each of those prefixes and suffixes (or just the top one element, if the window is sufficiently small).
Of those (up to) four values, determine which is the second-largest and return it.
With a little bit of preprocessing, we can indeed set up our windows to answer queries of the form "what are the two largest elements in each suffix?" and "what are the two largest elements in each prefix?" You can kinda sorta think of this as a dynamic programming problem, set up as follows:
For any prefix/suffix of length one, store the single value in that prefix/suffix.
For any prefix/suffix of length two, the top two values are the two elements themselves.
For any longer prefix or suffix, that prefix or suffix can be formed by extending a smaller prefix or suffix by a single element. To determine the top two elements of that longer prefix/suffix, compare the element used to extend the range to the top two elements and select the top two out of that range.
Notice that filling in each prefix/suffix's top-two values takes time O(1). This means that we can fill in any block in time O(w), since there are w entries to fill in. Moreover, since there are O(n / w) total blocks, the total time required to fill in these entries is O(n), so our overall algorithm runs in time O(n).
As for space usage: if you eagerly compute all prefix/suffix values throughout the entire array, you'll need to use space O(n) to hold everything. However, since at any point in time we only care about two blocks, you could alternatively only compute the prefixes/suffixes when you need them. That will require only space O(w), which is really, really good!
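Here is one possible Python rendering of Option 2 (my own sketch, eagerly building the per-block prefix/suffix tables rather than the O(w)-space variant):

```python
def block_second_max(a, w):
    # assumes 2 <= w <= len(a); pairs are (max, second-max)
    NEG = float('-inf')
    n = len(a)

    def merge(p, q):
        top = sorted([p[0], p[1], q[0], q[1]], reverse=True)
        return (top[0], top[1])

    prefix = [None] * n   # top two of a[block_start .. i]
    suffix = [None] * n   # top two of a[i .. block_end - 1]
    for start in range(0, n, w):
        end = min(start + w, n)
        cur = (NEG, NEG)
        for i in range(start, end):
            cur = merge(cur, (a[i], NEG))
            prefix[i] = cur
        cur = (NEG, NEG)
        for i in range(end - 1, start - 1, -1):
            cur = merge(cur, (a[i], NEG))
            suffix[i] = cur

    res = []
    for i in range(n - w + 1):
        j = i + w - 1
        if i % w == 0:                         # window is exactly one block
            res.append(suffix[i][1])
        else:                                  # suffix of one block + prefix of the next
            res.append(merge(suffix[i], prefix[j])[1])
    return res

print(block_second_max([12, 8, 10, 11, 4, 5], 4))  # [11, 10, 10]
```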
Option 3: An O(n)-time solution using clever data structures
This last approach turns out to be totally equivalent to the above approach, but frames it differently.
It's possible to build a queue that allows for constant-time querying of its maximum element. The idea behind this queue - beginning with a stack that supports efficient find-max and then using it in the two-stack queue construction - can easily be generalized to build a queue that gives constant-time access to the second-largest element. To do so, you'd just adapt the stack construction to store the top two elements at each point in time, not just the largest element.
If you have a queue like this, the algorithm for finding the second-max value in any window is pretty quick: load the queue up with the first w elements, then repeatedly dequeue an element (shift something out of the window) and enqueue the next element (shift something into the window). Each of these operations takes amortized O(1) time to complete, so this takes time O(n) overall.
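A sketch of such a queue in Python (my own illustration of the two-stack construction, storing a (max, second-max) pair alongside every stack entry):

```python
NEG = float('-inf')

def merge2(p, q):
    # combine two (max, second-max) pairs into one
    top = sorted([p[0], p[1], q[0], q[1]], reverse=True)
    return (top[0], top[1])

class SecondMaxQueue:
    """Queue with O(1) amortized access to its two largest elements."""
    def __init__(self):
        self.back, self.front = [], []   # stacks of (value, top-two at-or-below)

    @staticmethod
    def _push(stack, x):
        best = stack[-1][1] if stack else (NEG, NEG)
        stack.append((x, merge2(best, (x, NEG))))

    def enqueue(self, x):
        self._push(self.back, x)

    def dequeue(self):
        if not self.front:
            while self.back:                 # reverse the in-stack into the out-stack
                self._push(self.front, self.back.pop()[0])
        return self.front.pop()[0]

    def top_two(self):
        a = self.back[-1][1] if self.back else (NEG, NEG)
        b = self.front[-1][1] if self.front else (NEG, NEG)
        return merge2(a, b)

def queue_second_max(a, w):
    q, res = SecondMaxQueue(), []
    for i, x in enumerate(a):
        q.enqueue(x)
        if i >= w:
            q.dequeue()
        if i >= w - 1:
            res.append(q.top_two()[1])
    return res

print(queue_second_max([12, 8, 10, 11, 4, 5], 4))  # [11, 10, 10]
```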
Fun fact - if you look at what this queue implementation actually does in this particular use case, you'll find that it's completely equivalent to the above strategy. One stack corresponds to suffixes of the previous block and the other to prefixes of the next block.
This last strategy is my personal favorite, but admittedly that's just my own data structures bias.
Hope this helps!
So just take a data structure like an ordered multiset, which stores the data in sorted order.
For example, if you store 4 2 6 in the set, it will be stored as 2 4 6.
So what will be the algorithm:
Let,
Array = [12,8,10,11,4,5]
window size =4
first window= [12,8,10,11]
set =[8,10,11,12]
How to get the second highest:
- Remove the last element from the set and store it in a container: set = [8,10,11], container = 12.
- After removing, the current last element of the set is the second largest of the current window.
- Put the removed element stored in the container back into the set: set = [8,10,11,12].
Now shift your window:
- Delete 12 from the set and add 4.
- Now you have the new window and set.
- Repeat the same process.
The complexity of removing and adding an element in the set is O(log w) for a window of size w.
One trick:
If you always want the data stored in decreasing order, insert each value multiplied by -1, and multiply by -1 again when you take it back out.
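A quick way to try this idea in Python without a tree-based multiset is to keep the window in a sorted list with the bisect module (note: list insertion is O(w), so this is a simplification of the O(log w) structure the answer has in mind):

```python
import bisect

def sorted_window_second_max(a, w):
    # assumes 2 <= w <= len(a)
    window = sorted(a[:w])           # the "set", kept in ascending order
    res = [window[-2]]
    for i in range(w, len(a)):
        window.pop(bisect.bisect_left(window, a[i - w]))  # element leaving
        bisect.insort(window, a[i])                       # element entering
        res.append(window[-2])       # second largest = second from the end
    return res

print(sorted_window_second_max([12, 8, 10, 11, 4, 5], 4))  # [11, 10, 10]
```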
We can use a double ended queue for an O(n) solution. The front of the queue will have larger (and earlier seen) elements:
0 1 2 3 4 5
{12, 8,10,11, 4, 5}
window size: 3
i queue (stores indexes)
- -----
0 0
1 1,0
2 2,0 (pop 1, then insert 2)
output 10
remove 0 (remove indexes not in
the next window from the front of
the queue.)
3 3 (special case: there's only one
smaller element in queue, which we
need so keep 2 as a temporary variable.)
output 10
4 4,3
output 10
remove 2 from temporary storage
5 5,3 (pop 4, insert 5)
output 5
The "pop" and "remove from front" are while A[queue_back] <= A[i] and while queue_front is outside next window respectively (the complication of only one smaller element left represented in the queue notwithstanding). We output the array element indexed by the second element from the front of the queue (although our front may have a special temporary friend that was once in the front, too; the special friend is dumped as soon as it represents an element that's either outside of the window or smaller than the element indexed by the second queue element from the front). A double ended queue has complexity O(1) to remove from either front or back. We insert in the back only.
Per templatetypedef's request in the comments: "how you determine which queue operations to use?" At every iteration, with index i, before inserting it into the queue, we (1) pop every element from the back of the queue that represents an element in the array smaller than or equal to A[i], and (2) remove every element from the front of the queue that is an index outside the current window. (If during (1), we are left with only one smaller or equal element, we save it as a temporary variable since it is the current second largest.)
There is a relatively simple dynamic programming O(n^2) solution:
Build the classic pyramid structure for aggregate values over subranges (the one where you combine the values from the pairs below to make each step above), where you track the largest 2 values (and their positions), then simply keep the largest 2 of the 4 combined values (fewer in practice due to overlap; use the positions to ensure they are actually different). You then just read off the second largest value from the layer with the correct sliding window size.

Double hashing using composite numbers in second hash function

I realize that the best practice is to use the largest prime number (smaller than the size of the array) in the mod of the second hash function.
But my question is regarding the use of numbers that are not prime numbers.
I'm not interested in a pseudo-code just the idea behind the concept.
Let's say I have an array of size m=20, and I have to choose between 6, 9, 12 and 15 as the value that will be used in the second hash function. Which of them will give me the best 'spread'?
My first thought is to go for the same idea as choosing a prime number, only slightly modified, which means using the largest number that has the minimum amount of divisors:
6 -> 2,3
9 -> 3,3 = 3
12 -> 2,3,4,6
15 -> 3,5
Right off the bat I can rule out 6 (a larger number with the same amount of divisors exists) and 12 (too many divisors).
Now the question arises: should I use 9, which has the fewest divisors, or 15, which, although it has more divisors, is much larger than 9 and a lot closer to the size of the array (m=20)?
Am I correct in using this approach? or is there a better way of choosing a number, given I can only choose from the numbers stated above?
I have found the answer I was looking for, so I'm leaving the question here with the correct answer in case anyone else ever needs it.
If we are forced to choose a number that is not a prime number as the number to be used in the second hash function (in the mod of that function):
The correct approach is to use the GCD function (Greatest Common Divisor) to find numbers that are coprime - "prime with respect to each other". This means that we are looking for any number whose gcd with 20 is 1.
In this case:
gcd(20,6) = 2
gcd(20,9) = 1
gcd(20,12) = 4
gcd(20,15) = 5
As we can see, the gcd between 20 and 9 is 1, which means that they have no common factors other than 1. Therefore, 9 is the correct answer.
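This is easy to verify empirically: probing repeatedly with a given step size around a table of size m reaches exactly m / gcd(m, step) distinct slots (a small sketch of mine):

```python
from math import gcd

m = 20
for step in (6, 9, 12, 15):
    # slots visited when stepping by `step` repeatedly, mod m
    visited = {(i * step) % m for i in range(m)}
    print(step, "gcd:", gcd(m, step), "slots reached:", len(visited))
```

Only step 9 (gcd 1) reaches all 20 slots; the others cycle through a fraction of the table.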

external sorting: multiway merge

In a multiway merge, the task is to find the smallest element out of k elements.
Solution: priority queues
Idea: Take the smallest elements from the first k runs, store them into main memory in a heap tree.
Then repeatedly output the smallest element from the heap. The smallest element is replaced with the next element from the run from which it came.
When finished with the first set of runs, do the same with the next set of runs.
Assume my main memory size M is less than k. How can we sort the elements? In other words, how does the multiway merge work if the memory size M is less than k?
For example, if M = 3 and I have the following:
Tape1: 8 9 10
Tape2: 11 12 13
Tape3: 14 15 16
Tape4: 4 5 6
My question is how the multiway merge will work: we read 8, 11, 14 and build the priority queue, place 8 on the output tape and then advance Tape1. I don't get when Tape4 is read and how its elements will be compared with what has already been written to the output tape.
Thanks!
It won't work. You must choose a k small enough for available memory.
In this case, you could do a 3-way merge of the first 3 tapes, then a 2-way merge between the result of that and the one remaining tape. Or you could do 3 2-way merges (two pairs of tapes, then combine the results), which is simpler to implement but does more tape access.
In theory you could abandon the priority queue. Then you wouldn't need to store k elements in memory, but you would frequently need to look at the next element on all k tapes in order to find the smallest.
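For completeness, here is a small Python sketch (mine, using heapq as the priority queue) of the k-way merge, including the answer's two-phase plan for M = 3:

```python
import heapq

def multiway_merge(runs):
    # one (value, run index, position) heap entry per non-empty run
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):      # replace with the next element from that run
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out

tapes = [[8, 9, 10], [11, 12, 13], [14, 15, 16], [4, 5, 6]]

# with only M = 3 elements of memory: 3-way merge the first three tapes,
# then 2-way merge the result with the remaining tape
partial = multiway_merge(tapes[:3])
print(multiway_merge([partial, tapes[3]]))
```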
