Reordering items with multiple order criteria - sorting

Scenario:
list of photos
every photo has the following properties
id
sequence_number
main_photo_bit
the first photo has the main_photo_bit set to 1 (all others are 0)
photos are ordered by sequence_number (which is arbitrary)
the main photo does not necessarily have the lowest sequence_number (before sorting)
See the following table:
id, sequence_number, main_photo_bit
1 10 1
2 5 0
3 20 0
Now you want to change the order by changing the sequence number and main photo bit.
Requirements after sorting:
the sequence_number of the first position does not change (whichever photo becomes first takes it over)
the sequence_number of the first photo is the lowest
as few changes as possible
Examples:
Example #1 (second photo goes to the first position):
id, sequence_number, main_photo_bit
2 10 1
1 15 0
3 20 0
This is what happened:
id 1: new sequence_number and main_photo_bit set to 0
id 2: takes the old first photo's (id 1's) sequence_number, and main_photo_bit set to 1
id 3: nothing happens
Example #2 (third photo to first position):
id, sequence_number, main_photo_bit
3 10 1
1 20 0
2 30 0
This is what happened:
id 1: new sequence_number, bigger than the first photo's, and main_photo_bit set to 0
id 2: new sequence_number, bigger than the newly generated second sequence_number
id 3: takes the old first photo's sequence_number, and main_photo_bit set to 1
What is the best approach to calculate the steps needed to save the new order?
Edit:
The reason I want as few updates as possible is that I have to sync the changes to an external service, which is quite a costly operation.
I already have a working prototype of the algorithm, but it fails in some edge cases. So instead of patching it up (which might work, but would make it even more complex than it already is), I want to know if there are other (better) ways to do it.
In my version (in short), it orders the photos (changing sequence_numbers) and swaps the main_photo_bit, but that isn't sufficient to solve every scenario.

From what I understood, a good solution would not only minimize changes (since updating is the costly operation), but also try to minimize future changes, as more and more photos are reordered. I'd start by adding a temporary field dirty, to indicate if the row must change or not:
id, sequence_number, main_photo_bit, dirty
1 10 1 false
2 5 0 false
3 20 0 false
4 30 0 false
5 31 0 false
6 33 0 false
If there are rows whose sequence_number is smaller than the first's, they will surely have to change (either to get a higher number, or to become the first). Let's mark them as dirty:
id, sequence_number, main_photo_bit, dirty
2 5 0 true
(skip this step if it's not really important that the first has the lowest sequence_number)
Now let's see the list of photos, as they should be in the result (as per the question, only one photo changed places, from anywhere to anywhere); the dirty ones are marked with D in the gap-checking lists further below:
[1, 2, 3, 4, 5, 6] # Original ordering
[2, 1, 3, 4, 5, 6] # Example 1: 2nd to 1st place
[3, 1, 2, 4, 5, 6] # Example 2: 3rd to 1st place
[1, 2, 4, 3, 5, 6] # Example 3: 3rd to 4th place
[1, 3, 2, 4, 5, 6] # Example 4: 3rd to 2nd place
The first thing to do is to ensure the first element has the lowest sequence_number. If it hasn't changed places, this already holds by definition; otherwise, the old first should be marked as dirty and have its main_photo_bit cleared, and the new first should take over those values.
At this point, the first element has a fixed sequence_number, and every dirty element can have its value changed at will (it will have to change anyway, so we might as well change it to a useful value). Before proceeding, we must check whether the problem can be solved by changing only the dirty rows, or whether more rows will have to be dirtied as well. This is simply a matter of determining whether the interval between every pair of clean rows is big enough to fit the dirty rows between them:
[10, D, 20, 30, 31, 33] # Original ordering (the first is dirty, but fixed)
[10, D, 20, 30, 31, 33] # Example 1: 2nd to 1st place (ok: 10 < ? < 20)
[10, D, D, 30, 31, 33] # Example 2: 3rd to 1st place (ok: 10 < ? < ? < 30)
[10, D, 30, D, 31, 33] # Example 3: 3rd to 4th place (NOT OK: 30 < ? < 31)
[10, D, 30, D, D, 33] # must mark 5th as dirty too (ok: 30 < ? < ? < 33)
[10, D, D, 30, 31, 33] # Example 4: 3rd to 2nd place (ok)
Now it's just a matter of assigning new sequence_numbers to the dirty rows. A naïve solution would be to just increment the previous one, but a better approach is to set them as equally spaced as possible. This way, there are better odds that a future reorder will require fewer changes (in other words, it avoids problems like Example 3, where more rows than necessary had to be updated because some sequence_numbers were too close to each other):
[10, 15, 20, 30, 31, 33] # Example 1: 2nd to 1st place
[10, 16, 23, 30, 31, 33] # Example 2: 3rd to 1st place
[10, 20, 30, 31, 32, 33] # Example 3: 3rd to 4th place
[10, 16, 23, 30, 31, 33] # Example 4: 3rd to 2nd place
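To make the renumbering concrete, here is a rough Python sketch (all names are mine, not from the question). It takes the desired final order with None for the dirty rows, assumes the first row's sequence_number is already fixed, widens a dirty run whenever the gap between its clean neighbours cannot fit it, and spreads each run as evenly as integer division allows:

def renumber(seqs):
    out = list(seqs)                         # None marks a dirty row
    i = 0
    while i < len(out):
        if out[i] is not None:
            i += 1
            continue
        j = i                                # find the end of the dirty run
        while j < len(out) and out[j] is None:
            j += 1
        lo = out[i - 1]                      # the first row is never dirty here
        hi = out[j] if j < len(out) else None
        n = j - i
        # widen the run while the gap can't fit n distinct integers
        while hi is not None and hi - lo <= n:
            out[j] = None
            j += 1
            n += 1
            hi = out[j] if j < len(out) else None
        if hi is None:                       # run reaches the end: invent headroom
            hi = lo + 10 * (n + 1)
        for k in range(1, n + 1):            # spread as evenly as possible
            out[i + k - 1] = lo + k * (hi - lo) // (n + 1)
        i = j + 1
    return out

print(renumber([10, None, 20, 30, 31, 33]))    # Example 1 -> [10, 15, 20, 30, 31, 33]
print(renumber([10, None, 30, None, 31, 33]))  # Example 3 -> [10, 20, 30, 31, 32, 33]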
Bonus: if you really want to push the solution to its limits, do the computation twice, once moving the photo and once keeping it fixed while moving the surrounding photos, and see which one results in fewer changes. Take Example 3A, where instead of "3rd to 4th place" we treat it as "4th to 3rd place" (same sorting results, but different changes):
[1, 2, 4, 3, 5, 6] # Example 3A: 4th to 3rd place
[10, D, D, 20, 31, 33] # (ok: 10 < ? < ? < 20)
[10, 13, 16, 20, 31, 33] # One less change
In most cases this can be done (e.g., 2nd to 4th position == 3rd/4th to 2nd/3rd position); whether the added complexity is worth the small gain is up to you to decide.

Use a linked list instead of sequence numbers. Then you can remove a picture from anywhere in the list and reinsert it anywhere in the list, and you only need to change at most three rows in your database. The main photo bit becomes unnecessary, the first photo being implicitly defined by having no pointers to it.
id next
1 3
2 1
3 (none)
the order is: 2, 1, 3
user moves picture 3 to position 1:
id next
1 (none)
2 1
3 2
new order is: 3, 2, 1
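As a sketch of that bookkeeping (a Python dict standing in for the id/next table; the helper name is mine), moving a picture to the front only rewrites the pointers around it:

def move_to_front(next_of, head, photo):
    for pid, nxt in next_of.items():
        if nxt == photo:
            next_of[pid] = next_of[photo]  # old predecessor skips the moved photo
            break
    next_of[photo] = head                  # moved photo now points at the old head
    return photo                           # new head: nothing points to it anymore

next_of = {2: 1, 1: 3, 3: None}            # order: 2, 1, 3
head = move_to_front(next_of, 2, 3)
print(head, next_of)                       # 3 {2: 1, 1: None, 3: 2}

In SQL terms each dict write is a single-row UPDATE, which is where the at-most-three-rows figure comes from (moving to the very front touches only two).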

Related

Convert the permutation sequence A to B by selecting a set in A then reversing that set and inserting that set at the beginning of A

You are given sequences A and B, each consisting of N numbers and each a permutation of 1, 2, 3, ..., N. At each step, you choose a set S from sequence A, in order from left to right (the selected numbers are removed from A), then reverse S and add all of its elements to the beginning of A. Find a way to transform A into B in log2(N) steps.
Input: N <= 10^4 (the number of elements of sequences A and B) and the two permutation sequences A and B.
Output: K (Number of steps to convert A to B). The next K lines are the set of numbers S selected at each step.
Example:
Input:
5 // N
5 4 3 2 1 // A sequence
2 5 1 3 4 // B sequence
Output:
2
4 3 1
5 2
Step 0: S = {}, A = {5, 4, 3, 2, 1}
Step 1: S = {4, 3, 1}, A = {5, 2}. Then reverse S => S = {1, 3, 4}. Insert S to beginning of A => A = {1, 3, 4, 5, 2}
Step 2: S = {5, 2}, A = {1, 3, 4}. Then reverse S => S = {2, 5}. Insert S to beginning of A => A = {2, 5, 1, 3, 4}
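For concreteness, the allowed operation can be simulated with a tiny helper (the name apply_step is mine):

def apply_step(A, S):
    # select S left-to-right, remove it from A, reverse it, prepend it
    rest = [x for x in A if x not in set(S)]
    return list(S)[::-1] + rest

A = [5, 4, 3, 2, 1]
A = apply_step(A, [4, 3, 1])   # -> [1, 3, 4, 5, 2]
A = apply_step(A, [5, 2])      # -> [2, 5, 1, 3, 4]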
My solution is to use backtracking to consider all possible choices of S over the log2(n) steps. However, N is too large for that, so is there a better approach? Thank you.
For each operation of combined selecting/removing/prepending, you're effectively sorting the elements relative to a "pivot", and preserving order. With this in mind, you can repeatedly "sort" the items in backwards order (by that I mean, you sort on the most significant bit last), to achieve a true sort.
For an explicit example, let's take the example sequence 7 3 1 8. Rewrite the terms with their respective positions in the final sorted list (which would be 1 3 7 8), to get 2 1 0 3.
7 -> 2 // 7 is at index 2 in the sorted array
3 -> 1 // 3 is at index 1 in the sorted array
1 -> 0 // and so on
8 -> 3
This new array is equivalent to the original: we are just using indices to refer to the values indirectly (if you squint hard enough, we're kinda rewriting the unsorted list as pointers to the sorted list, rather than values).
Now, let's write these new values in binary:
2 10
1 01
0 00
3 11
If we were to sort this list, we'd first sort by the MSB (most significant bit) and then tiebreak only where necessary on the subsequent bits until we're at the LSB (least significant bit). Equivalently, we can sort by the LSB first, then sort all values on the next most significant bit, and continue in this fashion until we're at the MSB. This will work, and correctly sort the list, as long as the sort is stable, that is, it doesn't change the relative order of elements that are considered equal.
Let's work this out by example: if we sorted these by the LSB, we'd get
2 10
0 00
1 01
3 11
-and then following that up with a sort on the MSB (but no tie-breaking logic this time), we'd get:
0 00
1 01
2 10
3 11
-which is the correct, sorted result.
Remember the "pivot" sorting note at the beginning? This is where we use that insight. We're going to take this transformed list 2 1 0 3, and sort it bit by bit, from the LSB to the MSB, with no tie-breaking. And to do so, we're going to pivot on the criterion bit == 0.
This is effectively what we just did in our last example, so in the name of space I won't write it out again, but have a look again at what we did in each step. We took the elements whose current bit was equal to 0, and moved them to the beginning. First, we moved 2 (10) and 0 (00) to the beginning, and then in the next iteration we moved 0 (00) and 1 (01) to the beginning. This is exactly the operation your challenge permits you to do.
Additionally, because our numbers are reduced to their indices, the max value is len(array)-1, and the number of bits is log2() of that, so overall we'll only need to do log2(n) steps, just as your problem statement asks.
Now, what does this look like in actual code?
from itertools import product
from math import log2, ceil

nums = [5, 9, 1, 3, 2, 7]
size = ceil(log2(len(nums)))  # enough bits to represent the largest index, len(nums) - 1
bit_table = list(product([0, 1], repeat=size))
idx_table = {x: i for i, x in enumerate(sorted(nums))}
for bit_idx in range(size)[::-1]:
    subset_vals = [x for x in nums if bit_table[idx_table[x]][bit_idx] == 0]
    nums.sort(key=lambda x: bit_table[idx_table[x]][bit_idx])
    print(" ".join(map(str, subset_vals)))
You can of course use bitwise operators to accomplish the bit magic ((thing >> bit_idx) & 1) if you want, and you could del slices of the list + prepend instead of .sort()ing; this is just a proof of concept to show that it actually works. The actual output being:
1 3 7
1 7 9 2
1 2 3 5

Efficient algorithm for finding the right elements combinations

The problem is the following:
1) Total load is given as input
2) Number of steps over which the load is divided is also given as input
3) Each step can have a different discrete number of elements, which must be a multiple of 3 (i.e. 3, 6, 9, 12, 15 elements, ...).
4) Elements are given as input.
5) Acceptable solutions are within a certain range "EPSILON" of the total load (equal to the total load, or greater but within a certain margin, for example up to +2).
Example:
Total load: 50
Number of steps: 4
Allowed elements that can be used are: 0.5, 1, 1.5, 2.5, 3, 4
Acceptable margin: +2 (i.e. total load between 50 and 52).
Examples of solutions:
For simplicity here, each step has uniform elements, although we can have different elements in the same step (they just have to be grouped in threes, i.e. we can have 3 elements of 1 and 3 other elements of 2 in the same step, for a total load of 9).
Solution 1: total of 51
Step 1: 3 elements of 4 (so a total of 12); this step could, for example, instead be 3 elements of 3 and 3 elements of 1, i.e. 3 x 3 + 3 x 1 = 12.
Step 2: 3 Elements of 4 (total of 12),
Step 3: 9 Elements of 1.5 (total of 13.5),
Step 4: 9 Elements of 1.5 (total of 13.5),
Solution 2: total of 51
Step 1: 3 Elements of 4 (total of 12)
Step 2: 3 Elements of 4 (total of 12)
Step 3: 6 Elements of 2 (total of 12)
Step 4: 15 Elements of 1 (total of 15)
The code that I used takes the above input and generates a second program depending on the number of steps.
That second program basically loops over the steps (nested loops) and checks all the possible element combinations.
Example of the loops for a 2-step solution:
Code:
For NumberofElementsA = 3 To 18 Step 3
    ' 18 here is the maximum number of elements per step: I cannot let it go to infinity, so I need to define a maximum
    For NumberofElementsB = 3 To 18 Step 3
        For AllowedElementsA = 1 To 6
            For AllowedElementsB = AllowedElementsA To 6
                ' The allowed elements in this example were 6: [0.5, 1, 1.5, 2.5, 3, 4]
                LoadDifference = -TotalLoad + NumberofElementsA * ElementsArray(AllowedElementsA) + NumberofElementsB * ElementsArray(AllowedElementsB)
                ' Multiplies the number of elements (3, 6, 9, ..., 18) by the element value (0.5, 1, 1.5, ...) in each loop, then subtracts the total load
                If LoadDifference <= 2 And LoadDifference >= 0 Then
                    ' Solution OK
                End If
            Next AllowedElementsB
        Next AllowedElementsA
    Next NumberofElementsB
Next NumberofElementsA
So basically the code loops over all the possible numbers of elements and all the possible element values, and checks each result.
Is there an algorithm that solves the above problem more efficiently than looping over all possible outcomes?
Since you're restricted to groups of 3, this transforms immediately to a problem with all weights tripled:
1.5, 3, 4.5, 7.5, 9, 12
Your range is a target value plus up to 2, i.e. within 1 either way of the midpoint of that range (51 ± 1).
Since you've listed no requirement on balancing step loads, this is now an instance of the target sum problem -- with a little processing before and after the central solution.
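As a rough illustration of that reduction (the scaling by 2 to stay in integers and all names are mine; the split into steps is ignored here, since the answer treats it as pre/post-processing):

elements = [0.5, 1, 1.5, 2.5, 3, 4]
weights = sorted({int(e * 3 * 2) for e in elements})  # tripled, then doubled to integers
lo, hi = 50 * 2, 52 * 2                               # target window, same scaling

# parent[s] = weight used to first reach sum s (for reconstructing a solution)
parent = {0: None}
frontier = [0]
while frontier:
    nxt = []
    for s in frontier:
        for w in weights:
            if s + w <= hi and s + w not in parent:
                parent[s + w] = w
                nxt.append(s + w)
    frontier = nxt

for target in range(lo, hi + 1):
    if target in parent:
        groups, s = [], target
        while s:
            groups.append(parent[s] / 6)              # undo the x3 and x2 scaling
            s -= parent[s]
        print(target / 2, 'reachable via groups of 3 elements valued', groups)
        break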

algorithm to maximize sum of unique set with multiple contributors

I'm looking for an approach to maximize the value of a common set comprised of contributions from multiple sources, with a fixed number of contributions from each.
Example problem: 3 people each have a hand of cards. Each hand contains a unique set, but the 3 sets may overlap. Each player can pick three cards to contribute to the middle. How can I maximize the sum of the 9 contributed cards where
each player contributes exactly 3 cards
all 9 cards are unique (when possible)
a solution which can scale in the range of 200 possible "cards", 40 contributors and 6 contributions each.
Integer-programming sounds like a viable approach. Without guaranteeing it, this problem also feels NP-hard, meaning: there is no general algorithm beating brute-force (without assumptions about the possible input; IP-solvers actually do assume a lot / are tuned for real-world problems).
(Alternative off-the-shelf approaches: constraint programming and SAT solvers. CP: easy to formulate, fast combinatorial search, but weaker at branch-and-bound-style maximization; SAT: hard to formulate as counters need to be built, very fast combinatorial search, but again no concept of maximization: it needs a decision-problem-like transform.)
Here is a python-based complete example solving this problem (in the hard-constraint version: each player has to play all of his cards). As I'm using cvxpy, the code is quite math-style and should be easy to read even without knowing Python or the lib!
Before presenting the code, some remarks:
General remarks:
The IP-approach is heavily dependent on the underlying solver!
Commercial solvers (Gurobi and co.) are the best
Good open-source solvers: CBC, GLPK, lpsolve
The default solver in cvxpy is not ready for this (as the problem size increases)!
In my experiment, with my data, commercial solvers scale very well!
A popular commercial-solver needs a few seconds for:
N_PLAYERS = 40 , CARD_RANGE = (0, 400) , N_CARDS = 200 , N_PLAY = 6
Using cvxpy is not best practice, as it's created for very different use-cases, and this induces some penalty in terms of model-creation time
I'm using it because I'm familiar with it and I love it
Improvements: Problem
We are solving the each-player-plays-exactly-N_PLAY-cards version here
Sometimes there is no solution
Your model-description does not formally describe how to handle this
General idea to improve the code:
bigM-style penalty-based objective: e.g. Maximize(n_unique * bigM + classic_score)
(where bigM is a very big number)
Improvements: Performance
We are building all those pairwise-conflicts and use a classic not-both constraint
The number of conflicts, depending on the task can grow a lot
Improvement idea (too lazy to add):
Calculate the set of maximal cliques and add these as constraints
Will be much more powerful, but:
For general conflict-graphs this problem should be NP-hard too, so an approximation algorithm needs to be used
(as opposed to other applications like time-intervals, where this set can be calculated in polynomial time, as the graphs will be chordal)
Code:
import numpy as np
import cvxpy as cvx

np.random.seed(1)

""" Random problem """
N_PLAYERS = 5
CARD_RANGE = (0, 20)
N_CARDS = 10
N_PLAY = 3

card_set = np.arange(*CARD_RANGE)
p = np.empty(shape=(N_PLAYERS, N_CARDS), dtype=int)
for player in range(N_PLAYERS):
    p[player] = np.random.choice(card_set, size=N_CARDS, replace=False)

print('Players and their cards')
print(p)

""" Preprocessing:
    Conflict-constraints
    -> if p[i, j] == p[x, y] => don't allow both
    Could be made more efficient
"""
conflicts = []
for p_a in range(N_PLAYERS):
    for c_a in range(N_CARDS):
        for p_b in range(p_a + 1, N_PLAYERS):  # sym-reduction
            if p_b != p_a:
                for c_b in range(N_CARDS):
                    if p[p_a, c_a] == p[p_b, c_b]:
                        conflicts.append(((p_a, c_a), (p_b, c_b)))
# print(conflicts)  # debug

""" Solve """
# Decision-vars
x = cvx.Bool(N_PLAYERS, N_CARDS)

# Constraints
constraints = []

# -> Conflicts
for (p_a, c_a), (p_b, c_b) in conflicts:
    # don't allow both -> linearized
    constraints.append(x[p_a, c_a] + x[p_b, c_b] <= 1)

# -> N to play
constraints.append(cvx.sum_entries(x, axis=1) == N_PLAY)

# Objective
objective = cvx.sum_entries(cvx.mul_elemwise(p.flatten(order='F'), cvx.vec(x)))  # 2d -> 1d flattening
# ouch -> C vs. Fortran storage
# print(objective)  # debug

# Problem
problem = cvx.Problem(cvx.Maximize(objective), constraints)
problem.solve(verbose=False)

print('MIP solution')
print(problem.status)
print(problem.value)
print(np.round(x.T.value))

sol = x.value
nnz = np.where(abs(sol - 1) <= 0.01)  # being careful with fp-math
sol_p = p[nnz]
assert sol_p.shape[0] == N_PLAYERS * N_PLAY

""" Output solution """
for player in range(N_PLAYERS):
    print('player: ', player, 'with cards: ', p[player, :])
    print('   plays: ', sol_p[player*N_PLAY:player*N_PLAY+N_PLAY])
Output:
Players and their cards
[[ 3 16 6 10 2 14 4 17 7 1]
[15 8 16 3 19 17 5 6 0 12]
[ 4 2 18 12 11 19 5 6 14 7]
[10 14 5 6 18 1 8 7 19 15]
[15 17 1 16 14 13 18 3 12 9]]
MIP solution
optimal
180.00000005500087
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 0. 1. 0.]
[ 1. 0. 0. -0. -0.]
[ 1. -0. 1. 0. 1.]
[ 0. 1. 1. 1. 0.]
[ 0. 1. 0. -0. 1.]
[ 0. -0. 1. 0. 0.]
[ 0. 0. 0. 0. -0.]
[ 1. -0. 0. 0. 0.]
[ 0. 0. 0. 1. 1.]]
player: 0 with cards: [ 3 16 6 10 2 14 4 17 7 1]
plays: [ 6 10 7]
player: 1 with cards: [15 8 16 3 19 17 5 6 0 12]
plays: [ 8 19 17]
player: 2 with cards: [ 4 2 18 12 11 19 5 6 14 7]
plays: [12 11 5]
player: 3 with cards: [10 14 5 6 18 1 8 7 19 15]
plays: [14 18 15]
player: 4 with cards: [15 17 1 16 14 13 18 3 12 9]
plays: [16 13 9]
Looks like a packing problem, where you want to pack 3 disjoint subsets of your original sets, each of size 3, and maximize the sum. You can formulate it as an ILP. Without loss of generality, we can assume the cards represent natural numbers ranging from 1 to N.
Let a_i in {0,1} indicate if player A plays card with value i, where i is in {1,...,N}. Notice that if player A doesn't have card i in his hand, a_i is set to 0 in the beginning.
Similarly, define b_i and c_i variables for players B and C.
Also, similarly, let m_i in {0,1} indicate if card i will appear in the middle, i.e., one of the players will play a card with value i.
Now you can say:
Maximize Sum(m_i * i), subject to:
For each i in {1,...,N}:
(1) a_i, b_i, c_i, m_i are in {0, 1}
(2) m_i = a_i + b_i + c_i
And over all i:
(3) Sum(a_i) = 3, Sum(b_i) = 3, Sum(c_i) = 3
Discussion
Notice that constraints (1) and (2) force the uniqueness of each card in the middle: since m_i is binary, at most one of a_i, b_i, c_i can be 1 for any value i.
I'm not sure how big a problem commercial or non-commercial solvers can handle with this program, but notice that this is really a binary linear program, which might be simpler to solve than a general ILP, so it might be worth trying for the size you are looking for.
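If you want to try this formulation off the shelf, here is a rough sketch using PuLP and its bundled CBC solver. The toy hands, N, K and all identifiers are my own placeholders, not something given above:

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

# Hypothetical toy instance: card values in 1..N, K cards played per player
hands = {'A': {3, 16, 6, 10, 2}, 'B': {15, 8, 16, 3, 19}, 'C': {4, 2, 18, 16, 11}}
N, K = 20, 3

prob = LpProblem("middle_cards", LpMaximize)
# play[p, i] plays the role of a_i, b_i, c_i; m[i] is the middle indicator
play = {(p, i): LpVariable(f"play_{p}_{i}", cat="Binary")
        for p, hand in hands.items() for i in hand}
m = {i: LpVariable(f"m_{i}", cat="Binary") for i in range(1, N + 1)}

prob += lpSum(i * v for i, v in m.items())                # Maximize Sum(m_i * i)
for i in m:
    # m_i = a_i + b_i + c_i; binary m_i forces at most one copy of value i
    prob += m[i] == lpSum(play[p, i] for p in hands if (p, i) in play)
for p, hand in hands.items():
    prob += lpSum(play[p, i] for i in hand) == K          # each player plays K cards

prob.solve(PULP_CBC_CMD(msg=False))
print({p: sorted(i for i in hand if play[p, i].value() > 0.5)
       for p, hand in hands.items()})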
Sort each hand, dropping duplicate values. Delete anything past the 10-th highest card of any hand (3 hands * 3 cards/hand, plus 1): nobody can contribute a card that low.
For accounting purposes, make a directory by card value, showing which hands hold each value. For instance, given players A, B, C and these hands
A [1, 1, 1, 6, 4, 12, 7, 11, 13, 13, 9, 2, 2]
B [13, 2, 3, 1, 5, 5, 8, 9, 11, 10, 5, 5, 9]
C [13, 12, 11, 10, 6, 7, 2, 4, 4, 12, 3, 10, 8]
We would sort and de-dup the hands. 2 is the 10th-highest card of hand C, so we drop all values 2 and below. Then build the directory:
A [13, 12, 11, 9, 7, 6]
B [13, 11, 10, 9, 8, 5, 3]
C [13, 12, 11, 10, 8, 7, 6, 4, 3]
Directory:
13 A B C
12 A C
11 A B C
10 B C
9 A B
8 B C
7 A B
6 A C
5 B
4 C
3 B C
Now, you need to implement a backtracking algorithm to choose cards in some order, get the sum of that order, and compare with the best so far. I suggest that you iterate through the directory, choosing a hand from which to obtain the highest remaining card, backtracking when you run out of contributors entirely, or when you get 9 cards.
I recommend that you maintain a few parameters to allow you to prune the investigation, especially when you get into the lower values.
Make a maximum possible value, the sum of the top 9 values in the directory. If you hit this value, stop immediately, as you've found an optimum solution.
Make a high starting target: cycle through the hands in sequence, taking the highest usable card remaining in the hand. In this case, cycling A-B-C, we would have
13, 11, 12, 9, 10, 8, 7, 5, 6 => 81
// Note: because of the values I picked,
// this happens to provide an optimum solution.
// It will do so for a lot of the bridge-hand problem space.
Keep count of how many cards have been contributed by each hand; when one has given its 3 cards, disqualify it in some way: have a check in the choice code, or delete it from the local copy of the directory.
As you walk down the choice list, prune the search any time the remaining cards are insufficient to reach the best-so-far total. For instance, if you have a total of 71 after 7 cards, and the highest remaining card is 5, stop: you can't get to 81 with 5+4.
Does that get you moving?
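Here is a rough Python sketch of that search (all names are mine, and the greedy starting target is omitted for brevity). It walks the directory from the highest value down, branches on which hand with remaining quota contributes the value (or on nobody contributing it), and prunes with the optimistic bound described above:

def best_contribution(hands, n_play=3):
    values = sorted({v for h in hands for v in h}, reverse=True)
    holders = {v: [i for i, h in enumerate(hands) if v in h] for v in values}
    n_cards = n_play * len(hands)
    best = {'total': 0, 'cards': []}

    def walk(idx, quota, total, picked):
        if len(picked) == n_cards or idx == len(values):
            if total > best['total']:
                best['total'], best['cards'] = total, list(picked)
            return
        remaining = n_cards - len(picked)
        # optimistic bound: even taking the next-highest values can't win
        if total + sum(values[idx:idx + remaining]) <= best['total']:
            return
        v = values[idx]
        for hand in holders[v]:              # some hand contributes v...
            if quota[hand] > 0:
                quota[hand] -= 1
                picked.append(v)
                walk(idx + 1, quota, total + v, picked)
                picked.pop()
                quota[hand] += 1
        walk(idx + 1, quota, total, picked)  # ...or nobody does

    walk(0, [n_play] * len(hands), 0, [])
    return best

hands = [{13, 12, 11, 9, 7, 6},
         {13, 11, 10, 9, 8, 5, 3},
         {13, 12, 11, 10, 8, 7, 6, 4, 3}]
print(best_contribution(hands))              # reaches the 81 from the example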

Suggestions for optimizing length of bins for a time period

I have an optimisation problem where I need to optimize the lengths of a fixed number of bins over a known period of time. The bins should contain minimal overlapping items with the same tag (see definition of items and tags later).
If the problem can be solved heuristically, that is fine; the exact optimum is not important.
I was wondering if anybody had any suggestions as to approaches to try out for this, or had any ideas as to what the name of the problem would be.
The problem
Let's say we have n items, each with two attributes: a tag and a time range.
For an example we have the following items:
(tag: time range (s))
(1: 0, 2)
(1: 4, 5)
(1: 7, 8)
(1: 9, 15)
(2: 0, 5)
(2: 7, 11)
(2: 14, 20)
(3: 4, 6)
(3: 7, 11)
(4: 5, 15)
[Plot: the items drawn as horizontal intervals along the 20-second timeline.]
Let's say we have to bin this 20-second period of time into 4 groups. We could do this by having 4 groups of length 5, splitting the timeline at 5, 10 and 15 seconds.
The number of overlapping items with the same tag would be
Group 1: 1 (tag 1)
Group 2: 2 (tag 1 and tag 3)
Group 3: 2 (tag 2)
Group 4: 0
Total overlapping items: 5
Another grouping selection for 4 groups would then be of lengths 4, 3, 2 and 11 seconds.
The number of overlapping items with the same tag would be :
Group 1: 0
Group 2: 0
Group 3: 0
Group 4: 1 (tag 2)
Attempts to solve (brute force)
I can find the optimum solution by binning the whole period of time into small segments (say 1 second; for the above example there would be 20 segments).
I can then find all the integer compositions of the integer 20 that use 4 components. This provides 969 different compositions (binomial(19, 3)), e.g.
(1, 1, 4, 14), (9, 5, 5, 1), (1, 4, 4, 11), (13, 3, 3, 1), (3, 4, 4, 9), (10, 5, 4, 1), (7, 6, 6, 1), (1, 3, 5, 11), (2, 4, 4, 10) ......
For (1, 1, 4, 14) the grouping would be 4 groups of 1, 1, 4 and 14 seconds.
I then find the composition with the best score (smallest number of overlapping tags).
The problem with this approach is that it can only be done on relatively small numbers, as the number of compositions of an integer gets incredibly large as the integer grows.
Therefore, if my data is 1000 seconds and I have to use segments of 1 second, the run time would be too long.
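For reference, this brute force fits in a few lines (helper names are mine; since the per-group counts above are slightly ambiguous, score here counts same-tag pairs that share a group, so swap in your own scoring if you count differently):

from itertools import combinations

def compositions(total, parts):
    # all ordered tuples of `parts` positive integers summing to `total`
    if parts == 1:
        yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def score(items, lengths):
    total, start = 0, 0
    for length in lengths:
        end = start + length
        inside = [it for it in items if it[1] < end and it[2] > start]
        total += sum(1 for a, b in combinations(inside, 2) if a[0] == b[0])
        start = end
    return total

items = [(1, 0, 2), (1, 4, 5), (1, 7, 8), (1, 9, 15), (2, 0, 5),
         (2, 7, 11), (2, 14, 20), (3, 4, 6), (3, 7, 11), (4, 5, 15)]
best = min(compositions(20, 4), key=lambda lengths: score(items, lengths))
print(best, score(items, best))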
Attempts to solve (heuristically)
I have tried a genetic-algorithm-type approach, where chromosomes are randomly created compositions of lengths and genes are the individual lengths of each group.
Due to the nature of the data I am struggling to do any meaningful crossover/mutation though.
Does anyone have any suggestions?

Binary Search in C++ with array

Here is an array with exactly 15 elements:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Suppose that we are doing a binary search for an element. Indicate any elements that will be found by examining two or fewer numbers from the array.
What I've got: as we are doing binary search, the number found by only one comparison will be the 7th element = 7. Two comparisons lead to the second division of the array; that is, the number found can be either 3 or 11.
Am I right or not?
You are almost right; the first number is not seven but eight.
The other two will then be 4 and 12.
The correct answer would be: 4, 8, 12
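A tiny Python check (assuming the usual floor-midpoint convention, mid = (lo + hi) // 2) confirms this:

def probes(arr, target):
    lo, hi, count = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        if arr[mid] == target:
            return count
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

arr = list(range(1, 16))
print([x for x in arr if probes(arr, x) <= 2])  # [4, 8, 12]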
I found the first element examined to be 8, and the next two elements reachable with a second comparison to be 4 and 12.
Explanation of how I got the answers:
the given array is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15
head = 0, tail = 14 (the indices of 1 and 15)
middle = (0 + 14) / 2 = 7, so the first value examined is the element at index 7, which is 8
solving for the first half (the values below 8):
head = 0, tail = 6 (the indices of 1 and 7)
middle = (0 + 6) / 2 = 3, so the value examined is the element at index 3, which is 4
solving for the second half (the values above 8):
head = 8, tail = 14 (the indices of 9 and 15)
middle = (8 + 14) / 2 = 11, so the value examined is the element at index 11, which is 12
