Mathematical Set In Set-Builder And Set-Roster Notation

All subsets of Z that have an even number of members. For example, the set {1, 2, 3, 4} would be in this set, but {1, 2, 3} would not be in the set.
I thought it would be something like {A ⊆ ℤ | |A| / 2 ∈ ℤ} in set-builder notation and {infinity?} in set-roster notation. I believe it to be infinite because you could have 100, 1000, 10000, etc. elements and still have an even number of elements. For example, A = {1, 2, 80, -7, -5, 43}: the cardinality of A is 6, but I could put in two more arbitrary elements and it would be 8, and therefore still an even number of members.
Can anybody please explain where my thinking goes wrong? I think the answer is infinite, but I do not know how to express an infinite set in set-roster notation.
Thanks!
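For reference, one way to typeset the intended set in LaTeX, making the finiteness explicit (an infinite subset of ℤ does not have an even number of members, so the condition |A| / 2 ∈ ℤ only makes sense for finite A):

\[
  \bigl\{\, A \subseteq \mathbb{Z} \;\big|\; A \text{ is finite and } |A| \equiv 0 \pmod{2} \,\bigr\}
\]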


Maximum sum of n intervals in a sequence

I'm doing some programming "kata", which are skill-building exercises for programming (the name comes from martial arts). I want to learn how to solve algorithm problems like these in less time, so I need to develop my knowledge of the patterns. Eventually I want to solve them in increasingly efficient time complexities (O(n), O(n^2), etc.), but for now I'm fine with figuring out a solution of any efficiency to start.
The problem:
Given arr[10] = [4, 5, 0, 2, 5, 6, 4, 0, 3, 5]
Given various segment lengths, for example one 3-length segment and two 2-length segments, find the optimal positions of the segments (i.e. the maximum total sum they can cover) without any segments overlapping.
For example, the solution for this array and these segments is 32 (9 + 15 + 8), because:
{4 5} 0 2 {5 6 4} 0 {3 5}
What I have tried before posting on stackoverflow.com:
I've read through:
Algorithm to find maximum coverage of non-overlapping sequences. (I.e., the Weighted Interval Scheduling Prob.)
algorithm to find longest non-overlapping sequences
and I've watched MIT OpenCourseWare lectures and read about general steps for solving complex problems with dynamic programming, and completed a dynamic programming tutorial for finding Fibonacci numbers with memoization. I thought I could apply memoization to this problem, but I haven't found a way yet.
The theme of dynamic programming is to break the problem down into sub-problems which can be iterated to find the optimal solution.
What I have come up with (in an OO way) is
foreach (segment) {
    find the greatest-sum interval with the length of this segment
}
This produces incorrect results, because the segments will not always fit when placed this way. For example:
Given arr[7] = [0, 3, 5, 5, 5, 1, 0] and two 3-length segments,
The first segment will take 5, 5, 5, leaving no room for the second segment. Ideally I should memoize this scenario and try the algorithm again, this time avoiding 5, 5, 5 as a first pick. Is this the right path?
How can I approach this in a "dynamic programming" way?
If you place the first segment, you get two smaller sub-arrays: placing one or both of the two remaining segments into one of these sub-arrays is a sub-problem of just the same form as the original one.
So this suggests a recursion: you place the first segment, then try out the various combinations of assigning remaining segments to sub-arrays, and maximize over those combinations. Then you memoize: the sub-problems all take an array and a list of segment sizes, just like the original problem.
I'm not sure this is the best algorithm but it is the one suggested by a "direct" dynamic programming approach.
EDIT: In more detail:
The arguments to the valuation function should have two parts: one is a pair of numbers which represent the sub-array being analysed (initially [0,6] in this example) and the second is a multi-set of numbers representing the lengths of the segments to be allocated ({3,3} in this example). Then in pseudo-code you do something like this:
valuation(array_ends, the_segments):
    if the_segments is empty:
        return 0
    if sum of the_segments > array_ends[1] - array_ends[0]:
        return -infinity
    choose a segment from the_segments
    segment_length = length of the chosen segment
    remaining_segments = the_segments with the chosen segment removed
    best_option = 0
    for segment_placement = array_ends[0] to array_ends[1] - segment_length:
        value1 = value of placing the chosen segment at segment_placement
        new_array1 = [array_ends[0], segment_placement]
        new_array2 = [segment_placement + segment_length, array_ends[1]]
        for each partition of remaining_segments into seg1 and seg2:
            sub_value1 = valuation(new_array1, seg1)
            sub_value2 = valuation(new_array2, seg2)
            if value1 + sub_value1 + sub_value2 > best_option:
                best_option = value1 + sub_value1 + sub_value2
    return best_option
This code (modulo off-by-one errors and typos) calculates the valuation, but it calls the valuation function more than once with the same arguments. The idea of memoization is to cache those results and avoid re-traversing equivalent parts of the tree, which we can do just by wrapping the valuation function:
memoized_valuation(args):
    if args in memo_dictionary:
        return memo_dictionary[args]
    else:
        result = valuation(args)
        memo_dictionary[args] = result
        return result
Of course, you need to change the recursive call now to call memoized_valuation.
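For concreteness, here is a runnable Python sketch of this scheme. It is one possible rendering, not the answer's own code: interval values come from prefix sums of the array, the multiset of segment lengths is kept as a sorted tuple so it is hashable, functools.lru_cache plays the role of memo_dictionary, and partitions of the remaining segments are enumerated with a bitmask.

from functools import lru_cache

def max_segment_sum(arr, segment_lengths):
    # Prefix sums make any interval sum an O(1) lookup.
    prefix = [0]
    for x in arr:
        prefix.append(prefix[-1] + x)

    def interval_sum(lo, hi):
        # Sum of arr[lo:hi].
        return prefix[hi] - prefix[lo]

    @lru_cache(maxsize=None)
    def valuation(lo, hi, segments):
        # segments: sorted tuple of lengths still to place inside arr[lo:hi].
        if not segments:
            return 0
        if sum(segments) > hi - lo:
            return float('-inf')  # they cannot all fit in this window
        seg, rest = segments[0], segments[1:]
        best = float('-inf')
        for start in range(lo, hi - seg + 1):
            placed = interval_sum(start, start + seg)
            # Try every split of the remaining segments between the
            # sub-array on the left and the sub-array on the right.
            for mask in range(1 << len(rest)):
                left = tuple(s for i, s in enumerate(rest) if mask >> i & 1)
                right = tuple(s for i, s in enumerate(rest) if not mask >> i & 1)
                best = max(best, placed + valuation(lo, start, left)
                                        + valuation(start + seg, hi, right))
        return best

    return valuation(0, len(arr), tuple(sorted(segment_lengths)))

print(max_segment_sum([4, 5, 0, 2, 5, 6, 4, 0, 3, 5], [3, 2, 2]))  # prints 32

Subsequences of a sorted tuple stay sorted, so equivalent sub-problems hit the same cache entry regardless of the order in which segments were peeled off.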

Importance of operation order in backtracking algorithms

How important is the order of operations in each recursive step of a backtracking algorithm to the efficiency of that particular algorithm?
For example, in the Knight's Tour problem:
The knight is placed on the first block of an empty board and, moving
according to the rules of chess, must visit each square exactly once.
At each step there are (in general) 8 possible ways to move:
int xMove[8] = { 2, 1, -1, -2, -2, -1, 1, 2 };
int yMove[8] = { 1, 2, 2, 1, -1, -2, -2, -1 };
If I change this order to:
int xmove[8] = { -2, -2, 2, 2, -1, -1, 1, 1};
int ymove[8] = { -1, 1,-1, 1, -2, 2, -2, 2};
Now, for an n*n board, up to n = 6 the two move orders show no visible difference in execution time, but for n >= 7 the first move order's execution time is much less than the second's.
In such cases it is not feasible to generate all O(m!) move orders and test the algorithm against each one. So how do I determine the performance of such an algorithm for a specific move order, or rather, how could I arrive at one (or a set of) move orders for which the algorithm is more efficient in terms of execution time?
This is an interesting problem from a Math/CS perspective. There definitely exists a permutation (or set of permutations) that is most efficient for a given n. I don't know if there is a single permutation that is most efficient for all n; I would guess not. There could be a permutation that is better 'on average' (however you define that) across all n.
If I were tasked to find an efficient permutation, I might try the following: generate a fixed number x of random move orders and measure their efficiency. For each of those randomly generated move orders, randomly create a fixed number of permutations that are near the original, and compute their efficiencies. Now you have many more permutations than you started with; take the top x performing ones and repeat. This will find some locally optimal move orders, but I don't know if it leads to the globally optimal one(s).
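A minimal Python sketch of that search loop, under assumptions of my own: the efficiency of a move order is measured by counting recursive calls of a plain backtracking Knight's Tour solver (a proxy for execution time, capped so hopeless orders don't run forever), "near" permutations are made by swapping two moves, and the board size and pool parameters are kept tiny so the demo finishes quickly.

import random

MOVES = [(2, 1), (1, 2), (-1, 2), (-2, 1), (-2, -1), (-1, -2), (1, -2), (2, -1)]

def tour_calls(order, n=5, cap=200000):
    # Cost proxy for a move order: number of recursive calls a plain
    # backtracking Knight's Tour solver makes when trying moves in this
    # order. The cap censors very bad orders instead of waiting them out.
    board = [[False] * n for _ in range(n)]
    calls = 0

    def solve(x, y, visited):
        nonlocal calls
        calls += 1
        if calls >= cap or visited == n * n:
            return True  # done, or over budget: unwind quickly
        for dx, dy in order:
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and not board[nx][ny]:
                board[nx][ny] = True
                if solve(nx, ny, visited + 1):
                    return True
                board[nx][ny] = False
        return False

    board[0][0] = True
    solve(0, 0, 1)
    return calls

def nearby(order):
    # A "nearby" permutation: swap two randomly chosen moves.
    i, j = random.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return tuple(new)

def evolve(x=4, rounds=3, mutants=3):
    # Keep the x cheapest move orders, add a few mutations of each, repeat.
    pool = [tuple(random.sample(MOVES, len(MOVES))) for _ in range(x)]
    for _ in range(rounds):
        candidates = set(pool)
        for order in pool:
            candidates.update(nearby(order) for _ in range(mutants))
        pool = sorted(candidates, key=tour_calls)[:x]
    return pool[0]

print(evolve())

Real measurements would use larger boards and several start squares per evaluation; the toy parameters here are only to keep the sketch fast.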

Sum of Multiples of Numbers Greater Or Equal to Target, Optimization

Given an inequality like
2(p1) + 3(p2) + 7(p3) >= 257
I need to find all possible combinations of p1, p2, p3
such that the above statement is true and the resulting sum (the left-hand side of the inequality) is minimal, where all the xn are known.
I tried looking up algorithms for general cases like
(x1)(p1) + (x2)(p2) + (x3)(p3) + ... + (xn)(pn) >= target
And I came across the Knapsack problem and Subset-Sum algorithm solutions, but they weren't exactly like this problem.
I have previously tried an algorithm in Python 3.x that uses lower-bound values for pn, but it still runs in O( ridiculous ) time complexity.
Obviously all numbers here are natural numbers, otherwise there would be infinitely many solutions.
I can see two possible approaches, depending on whether the Pi have to be >= 0. The case with Pi >= 0 is more sensible, so I will consider it first.
Treat this as dynamic programming, where you work from left to right along the equation. Looking at the larger equation in your comment, first of all create a list of the contributions from p0: 0, 5, 10, 15... 190384760, and beside them the value of p0 that produces them: 0, 1, 2, ... 190384760/5.
Now use this table to work out the values of 5p0 + 7p1 possible by combining the first two: 0, 5, 7, 10, 12, 14.... and keep the value of p1 needed to produce them.
Continuing from left to right, you will end up with a table of the values up to just over 190384755 that can be created by non-negative integer combinations of p0..p8. You obviously only care about the smallest one >= 190384755. Consider all possible values of the p8 contribution, subtract these from that total, and look in the table for p0..p7 to see which of the results are achievable. This gives you all possible values of p8, and for each of these you can recursively repeat the process to print out all possible values of p7, and so on, repeating the recursion to produce all values of p0..p8 that yield the lowest value just over 190384755. This is very similar to the pseudo-polynomial algorithm for subset sum.
If the Pi can be < 0, then the achievable values are all multiples of the gcd of the coefficients, which is very likely to mean all integers, and there are infinitely many solutions. If this is really what you want, you can start by reading about the extended Euclidean algorithm: http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
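A sketch of the non-negative case in Python, scaled down to the question's toy instance 2(p1) + 3(p2) + 7(p3) >= 257. The table of reachable left-hand-side values is built term by term, and one optimal combination is recovered by walking the table backwards; enumerating all optimal combinations would branch at each step instead of taking the first coefficient found.

def min_sum_at_least(coeffs, target):
    # Build, term by term, the table of left-hand-side values reachable
    # with non-negative integer multipliers (the approach described above).
    limit = target + max(coeffs)            # an optimum always lies below this
    reachable = [False] * (limit + 1)
    reachable[0] = True
    step = [0] * (limit + 1)                # which coefficient first reached v
    for c in coeffs:
        for v in range(c, limit + 1):
            if reachable[v - c] and not reachable[v]:
                reachable[v] = True
                step[v] = c
    best = next(v for v in range(target, limit + 1) if reachable[v])
    counts = {c: 0 for c in coeffs}         # walk back to recover one combination
    v = best
    while v:
        counts[step[v]] += 1
        v -= step[v]
    return best, counts

print(min_sum_at_least([2, 3, 7], 257))     # -> (257, {2: 127, 3: 1, 7: 0})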
Maybe the given example is just a toy case.
If not, exhaustive search is quite feasible: the minimal sum is bounded above by 259 (the combination 0, 0, 37), and there are fewer than half a million combinations under this bound.
In addition, if you set two variables, say p2 and p3, such that 3(p2) + 7(p3) < 257, it is an easy matter to find the smallest p1 such that 2(p1) + 3(p2) + 7(p3) >= 257. You will just have to try 3200 (p2, p3) combinations or so.
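Since the bound makes brute force cheap here, a sketch of this second approach: loop over the (p2, p3) pairs, compute the smallest admissible p1 directly, and keep the minimum. Collecting every combination that attains the minimum (as the question asks) is a small change: append to a list whenever the total equals the best instead of keeping a single tuple.

best = None
for p2 in range(257 // 3 + 1):
    for p3 in range(257 // 7 + 1):
        rest = 257 - 3 * p2 - 7 * p3
        p1 = max(0, -(-rest // 2))        # smallest p1 with 2*p1 >= rest
        total = 2 * p1 + 3 * p2 + 7 * p3
        if best is None or total < best[0]:
            best = (total, p1, p2, p3)

print(best)  # -> (257, 125, 0, 1): minimal left-hand side and one witness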

How to determine best combinations from 2 lists

I'm looking for a way to make the best possible combination of people in groups. Let me sketch the situation.
Say we have persons A, B, C and D. Furthermore, we have groups 1, 2, 3, 4 and 5. Both are examples and there can be fewer or more. Each person gives a rating to each other person; so, for example, A rates B a 3, C a 2, and so on. Each person also rates each group (say ratings are 0-5). Now I need some sort of algorithm to distribute these people evenly over the groups while keeping them as happy as possible (as in: they should be in a highly rated group, with highly rated people). I know it's not possible for everyone to be in the best group (the one they rated a 5), but I need them to be in the best possible solution for the group as a whole.
I think this is a difficult question, and I would be happy if someone could direct me to more information about these types of problems, or help me with the algorithm I'm looking for.
Thanks!
EDIT:
I see a lot of great answers, but this problem is too great for me to solve correctly. However, the answers posted so far give me a great starting point to look further into the subject. Thanks a lot already!
After establishing that this is an NP-hard problem, I would suggest heuristic solutions from the Artificial Intelligence toolbox.
A possible approach is steepest-ascent hill climbing [SAHC].
First, we define our utility function (call it u). It can be the sum of the total happiness in all groups.
Next, we define our 'world': S is the set of all possible partitions.
For each legal partition s in S, we define:
next(s) = {all partitions obtained by moving one person to a different group}
All we have to do now is run SAHC with random restarts:
1. best <- -INFINITY
2. while there is more time:
3.     choose a random partition as a starting point; denote it s.
4.     NEXT <- next(s)
5.     if max{ u(s') : s' in NEXT } < u(s):    // s is the top of the hill
5.1.       if u(s) > best: best <- u(s)        // if s is better than the previous result, store it.
5.2.       go to 2.                            // restart the hill climbing from a different random point.
6.     else:
6.1.       s <- argmax{ u(s') : s' in NEXT }   // climb to the best neighbour.
6.2.       go to 4.
7. return best                                 // when out of time, return the best solution found so far.
It is an anytime algorithm, meaning it will give a better result the more time it runs, and eventually [at time infinity] it will find the optimal result.
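A compact Python rendering of that loop, with the problem-specific pieces left as assumed callbacks: random_partition() returns a random legal partition, next_states(s) generates the one-person-moved neighbours, and u scores a partition.

import time

def sahc(random_partition, next_states, u, seconds=5.0):
    # Steepest-ascent hill climbing with random restarts, following the
    # pseudocode above.
    best_value, best = float('-inf'), None
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:            # "while there is more time"
        s = random_partition()                    # random restart
        while True:
            climb = max(next_states(s), key=u, default=None)
            # <= (rather than the pseudocode's strict <) also stops the climb
            # on plateaus, so it cannot cycle among equally good neighbours.
            if climb is None or u(climb) <= u(s):
                break                             # s is the top of the hill
            s = climb
        if u(s) > best_value:                     # keep the best hilltop found
            best_value, best = u(s), s
    return best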
The problem is NP-hard: you can reduce from Maximum Triangle Packing, that is, finding at least k vertex-disjoint triangles in a graph, to the version where there are k groups of size 3, no one cares about which group he is in, and everyone rates everyone else either 0 or 1. So even this very special case is hard.
To solve it, I would try using an ILP: have binary variables g_ik indicating that person i is in group k, with constraints to ensure a person is only in one group and a group has an appropriate size. Further, binary variables t_ijk that indicate that persons i and j are together in group k (ensured by t_ijk <= 0.5 g_ik + 0.5 g_jk) and binary variables t_ij that indicate that i and j are together in any group (ensured by t_ij <= sum_k t_ijk). You can then maximize the happiness function under these constraints.
This ILP has very many variables, but modern solvers are pretty good and this approach is very easy to implement.
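A sketch of that ILP in PuLP (pip install pulp). The toy random data, the equal-group-size constraint and the exact happiness objective are assumptions of mine; the g and t variables follow the answer's formulation. In a maximization the t variables are pushed up against their constraints automatically, so no lower-bounding constraints are needed.

import random
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

P, K = 6, 3  # people, groups (toy sizes)
random.seed(0)
person_rating = [[random.randint(0, 5) for _ in range(P)] for _ in range(P)]
group_rating = [[random.randint(0, 5) for _ in range(K)] for _ in range(P)]

prob = LpProblem("grouping", LpMaximize)
g = [[LpVariable("g_%d_%d" % (i, k), cat=LpBinary) for k in range(K)]
     for i in range(P)]
t = {}  # t[i, j, k] == 1 iff persons i and j are together in group k
for i in range(P):
    for j in range(i + 1, P):
        for k in range(K):
            t[i, j, k] = LpVariable("t_%d_%d_%d" % (i, j, k), cat=LpBinary)
            prob += t[i, j, k] <= 0.5 * g[i][k] + 0.5 * g[j][k]

for i in range(P):
    prob += lpSum(g[i][k] for k in range(K)) == 1       # one group per person
for k in range(K):
    prob += lpSum(g[i][k] for i in range(P)) == P // K  # equal group sizes

# Objective: each person's rating of their group, plus the mutual ratings
# of every pair that ends up together.
prob += (lpSum(group_rating[i][k] * g[i][k] for i in range(P) for k in range(K))
         + lpSum((person_rating[i][j] + person_rating[j][i]) * t[i, j, k]
                 for (i, j, k) in t))

prob.solve()
for k in range(K):
    print(k, [i for i in range(P) if g[i][k].value() == 1])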
This is an example of an optimization problem. It is a very well-studied type of problem with very good methods for solving it. Read Programming Collective Intelligence, which explains it much better than I can.
Basically, there are three parts to any kind of optimization problem:
The input to the problem-solving function.
The solution output by the problem-solving function.
A scoring function that evaluates how optimal the solution is by scoring it.
Now the problem can be stated as finding the solution that produces the highest score. To do that, you first need to come up with a format to represent a possible solution that the scoring function can then score. Assuming 6 persons (0-5) and 3 groups (0-2), this Python data structure would work and would be a possible solution:
output = [
[0, 1],
[2, 3],
[4, 5]
]
Persons 0 and 1 are put in group 0, persons 2 and 3 in group 1, and so on. To score this solution, we need to know the input and the rules for calculating the output. The input could be represented by this data structure:
input = [
[0, 4, 1, 3, 4, 1, 3, 1, 3],
[5, 0, 1, 2, 1, 5, 5, 2, 4],
[4, 1, 0, 1, 3, 2, 1, 1, 1],
[2, 4, 1, 0, 5, 4, 2, 3, 4],
[5, 5, 5, 5, 0, 5, 5, 5, 5],
[1, 2, 1, 4, 3, 0, 4, 5, 1]
]
Each list in the list represents the ratings one person gave. For example, in the first row, person 0 gave rating 0 to person 0 (you can't rate yourself), 4 to person 1, 1 to person 2, 3 to person 3, 4 to person 4 and 1 to person 5. He or she then rated groups 0-2 with 3, 1 and 3 respectively.
So the above is an example of a valid solution to the given input. How do we score it? That's not specified in the question, only that the "best" combination is desired, so I'll arbitrarily decide that the score for a solution is the sum of each person's happiness. Each person's happiness is determined by adding his or her rating of the group to the average of his or her ratings of the other members of the group.
Here is the scoring function:
N_GROUPS = 3
N_PERSONS = 6
def score_solution(input, output):
    tot_score = 0
    for person, ratings in enumerate(input):
        # Check what group the person is a member of.
        for group, members in enumerate(output):
            if person in members:
                # Check what rating the person gave the group.
                group_rating = ratings[N_PERSONS + group]
                # Check what ratings the person gave the others.
                others = list(members)
                others.remove(person)
                if not others:
                    # Protect against division by zero.
                    person_rating = 0
                else:
                    person_ratings = [ratings[o] for o in others]
                    person_rating = sum(person_ratings) / float(len(person_ratings))
                tot_score += group_rating + person_rating
    return tot_score
It should return a score of 37.0 for the given solution. Now what we'll do is generate valid outputs while keeping track of which one is best until we are satisfied:
from random import choice

def gen_solution():
    groups = [[] for x in range(N_GROUPS)]
    for person in range(N_PERSONS):
        choice(groups).append(person)
    return groups

# Generate 10000 solutions
solutions = [gen_solution() for x in range(10000)]
# Score them
solutions = [(score_solution(input, sol), sol) for sol in solutions]
# Sort by score, take the best.
best_score, best_solution = sorted(solutions)[-1]
print('The best solution is %s with score %.2f' % (best_solution, best_score))
Running this on my computer produces:
The best solution is [[0, 1], [3, 5], [2, 4]] with score 47.00
Obviously, you may think it is a really stupid idea to just randomly generate solutions and throw them at the problem, and it is. There are much more sophisticated methods for generating solutions, such as simulated annealing or genetic optimization. But they all build upon the same framework as the one given above.
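As a taste of the more sophisticated end, here is a simulated-annealing variant that plugs straight into the gen_solution and score_solution functions above; the neighbour move (relocate one random person) and the cooling parameters are arbitrary choices of mine, not tuned values.

import math
import random

def neighbor(solution):
    # Move one randomly chosen person to another group.
    new = [list(group) for group in solution]
    src = random.choice([group for group in new if group])
    person = random.choice(src)
    src.remove(person)
    random.choice([group for group in new if group is not src]).append(person)
    return new

def anneal(input, temp=10.0, cooling=0.999, floor=0.01):
    s = gen_solution()
    best = s
    while temp > floor:
        cand = neighbor(s)
        delta = score_solution(input, cand) - score_solution(input, s)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature drops.
        if delta > 0 or random.random() < math.exp(delta / temp):
            s = cand
        if score_solution(input, s) > score_solution(input, best):
            best = s
        temp *= cooling
    return best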

Algorithm for merging sets that share at least 2 elements

Given a list of sets:
S_1 : [ 1, 2, 3, 4 ]
S_2 : [ 3, 4, 5, 6, 7 ]
S_3 : [ 8, 9, 10, 11 ]
S_4 : [ 1, 8, 12, 13 ]
S_5 : [ 6, 7, 14, 15, 16, 17 ]
What is the most efficient way to merge all sets that share at least 2 elements? I suppose this is similar to a connected-components problem. The result would be:
[ 1, 2, 3, 4, 5, 6, 7, 14, 15, 16, 17] (S_1 UNION S_2 UNION S_5)
[ 8, 9, 10, 11 ]
[ 1, 8, 12, 13 ] (S_4 shares 1 with S_1 and 8 with S_3, but is not merged, because it shares only one element with each)
The naive implementation is O(N^2), where N is the number of sets, which is unworkable for us. This would need to be efficient for millions of sets.
Let there be a list of many Sets named (S)

Perform a pass through all elements of S to determine the range (LOW..HIGH).
Create a 2-D array of pointers to Set, indexed by (LOW..HIGH, LOW..HIGH), named (M).

do
    Init all elements of M to NULL.
    Iterate through S, processing the Sets one at a time; call the current one (Si).
        Enumerate all ordered pairs (P1, P2) in Si, where P1 <= P2.
        For each pair, examine M(P1, P2):
            if M(P1, P2) is NULL:
                Continue with the next pair.
            otherwise:
                Merge Si into the Set pointed to by M(P1, P2).
                Remove Si from S, as it has been merged.
                Move on to processing Set S(i + 1).
        If Si was not merged:
            Enumerate the pairs of Si again:
                For each pair, make M(P1, P2) point to Si.
while At least one set was merged during the pass.
My head is saying this is about Order (2N ln N).
Take that with a grain of salt.
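A Python sketch of those passes, using a dictionary keyed by element pairs in place of the 2-D array M (the function and variable names are mine). A set that matches two different owners is merged into the first; the outer loop picks up the remaining merge on a later pass, which is exactly why the pseudocode repeats until a pass merges nothing.

from itertools import combinations

def merge_sets(sets):
    sets = [set(s) for s in sets]
    merged = True
    while merged:
        merged = False
        owner = {}                    # element pair -> index into result
        result = []
        for s in sets:
            target = None
            for pair in combinations(sorted(s), 2):
                if pair in owner:     # shares >= 2 elements with result[target]
                    target = owner[pair]
                    break
            if target is None:
                result.append(s)
                target = len(result) - 1
            else:
                result[target] |= s
                merged = True
            # Register every pair of the (possibly grown) target set.
            for pair in combinations(sorted(result[target]), 2):
                owner[pair] = target
        sets = result
    return sets

print(merge_sets([{1, 2, 3, 4}, {3, 4, 5, 6, 7}, {8, 9, 10, 11},
                  {1, 8, 12, 13}, {6, 7, 14, 15, 16, 17}]))
# -> the three groups expected in the question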
If you can order the elements in the sets, you can look into using Mergesort on the sets. The only modification needed is to check for duplicates during the merge phase; if one is found, just discard the duplicate. Since mergesort is O(n*log(n)), this will offer improved speed compared to the naive O(n^2) algorithm.
However, to really be effective, you should maintain a sorted set and keep it sorted, so that you can skip the sort phase and go straight to the merge phase.
I don't see how this can be done in less than O(n^2).
Every set needs to be compared to every other one to see if they contain 2 or more shared elements. That's n*(n-1)/2 comparisons, therefore O(n^2), even if the check for shared elements takes constant time.
In sorting, the naive implementation is O(n^2), but you can take advantage of the transitive nature of ordered comparison (so, for example, you know nothing in the lower partition of quicksort needs to be compared to anything in the upper partition, as it has already been compared to the pivot). This is what results in sorting being O(n * log n).
This doesn't apply here. So unless there's something special about the sets that allows us to skip comparisons based on the results of previous comparisons, it's going to be O(n^2) in general.
Paul.
One side note: It depends on how often this occurs. If most pairs of sets do share at least two elements, it might be most efficient to build the new set at the same time as you are stepping through the comparison, and throw it away if they don't match the condition. If most pairs do not share at least two elements, then deferring the building of the new set until confirmation of the condition might be more efficient.
If your elements are numerical in nature, or can be naturally ordered (i.e. you can assign a value such as 1, 2, 42, etc.), I would suggest using a radix sort on the merged sets, and making a second pass to pick up the unique elements.
This algorithm should be O(n), and you can optimize the radix sort quite a bit using bitwise shift operators and bit masks. I have done something similar for a project I was working on, and it works like a charm.
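A sketch of that suggestion for non-negative integer elements (the names are mine). Note it computes the deduplicated union of all the input sets, so it fits the question only once you already know which sets belong together:

def radix_sort(nums, key_bits=32, radix_bits=8):
    # LSD radix sort built from shifts and masks, as suggested above.
    mask = (1 << radix_bits) - 1
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(1 << radix_bits)]
        for x in nums:
            buckets[(x >> shift) & mask].append(x)
        nums = [x for bucket in buckets for x in bucket]
    return nums

def union_unique(sets):
    # Flatten, radix-sort, then one pass dropping adjacent duplicates.
    merged = radix_sort([x for s in sets for x in s])
    out = []
    for x in merged:
        if not out or out[-1] != x:
            out.append(x)
    return out

print(union_unique([[3, 4, 5, 6, 7], [1, 2, 3, 4]]))  # [1, 2, 3, 4, 5, 6, 7]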
