How do I implement the incremental linear programming algorithm in 2D?
I am looking for implementation details of the incremental algorithm for linear programming in 2D. I am trying to use it for red-blue point separation.
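The usual incremental algorithm here is Seidel's randomized 2D LP: process the constraints in random order, keep the current optimum, and whenever a newly added constraint is violated, re-solve a one-dimensional LP restricted to that constraint's boundary line; backward analysis gives expected O(n) time. Below is a minimal Python sketch under assumptions of my own (constraints of the form a1*x + a2*y <= b, objective minimize c[0]*x + c[1]*y, and a large box [-M, M]^2 to keep the problem bounded), not a hardened implementation:

    import random

    def solve_1d(c1, cons, lo, hi):
        # 1D LP: minimize c1 * t subject to a * t <= b for each (a, b) in cons.
        for a, b in cons:
            if abs(a) < 1e-12:
                if b < -1e-9:
                    return None            # constraint 0 <= b violated: infeasible
            elif a > 0:
                hi = min(hi, b / a)
            else:
                lo = max(lo, b / a)
        if lo > hi + 1e-9:
            return None                    # empty interval: infeasible
        return lo if c1 > 0 else hi

    def seidel_lp(c, halfplanes, M=1e9):
        # Minimize c[0]*x + c[1]*y subject to a1*x + a2*y <= b for each
        # (a1, a2, b) in halfplanes, implicitly bounded by the box [-M, M]^2.
        hs = list(halfplanes)
        random.shuffle(hs)                 # randomness gives expected O(n) time
        x = (-M if c[0] > 0 else M, -M if c[1] > 0 else M)
        seen = []
        for a1, a2, b in hs:
            if a1 * x[0] + a2 * x[1] <= b + 1e-9:
                seen.append((a1, a2, b))
                continue
            # The new optimum lies on the line a1*x + a2*y = b: substitute the
            # line into the objective and prior constraints, then solve a 1D LP.
            if abs(a2) > abs(a1):          # parametrize by x; y = (b - a1*x)/a2
                cc = c[0] - c[1] * a1 / a2
                cons = [(p1 - p2 * a1 / a2, q - p2 * b / a2) for p1, p2, q in seen]
                t = solve_1d(cc, cons, -M, M)
                if t is None:
                    return None
                x = (t, (b - a1 * t) / a2)
            else:                          # parametrize by y; x = (b - a2*y)/a1
                cc = c[1] - c[0] * a2 / a1
                cons = [(p2 - p1 * a2 / a1, q - p1 * b / a1) for p1, p2, q in seen]
                t = solve_1d(cc, cons, -M, M)
                if t is None:
                    return None
                x = ((b - a2 * t) / a1, t)
            seen.append((a1, a2, b))
        return x

    # Example: minimize x + y subject to x >= 1 and y >= 2 -> (1.0, 2.0)
    print(seidel_lp((1, 1), [(-1, 0, -1), (0, -1, -2)]))

For red-blue separation you can, for example, look for a non-vertical separating line y = s*x + t with unknowns (s, t), adding, for some small eps > 0, the constraint s*x_i + t - y_i <= -eps for each red point (red strictly above the line) and y_i - s*x_i - t <= -eps for each blue point (blue strictly below); feasibility of that 2D LP answers the separation question (vertical separating lines need a separate check).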
Is it possible that a greedy algorithm is also a dynamic programming algorithm?
I took an Analysis of Algorithms class, but I am still not sure about the two concepts.
I understand that the greedy approach uses the current optimal choice to build toward a globally optimal solution, while a DP algorithm reuses the results of overlapping subproblems.
I believe the answer is "yes", but I couldn't find a good example of an algorithm that is both greedy and DP.
Could someone give me an example?
If the answer to the above question is "no", could someone explain to me why?
From looking at the Bellman equation: if, in the minimization, we can separate the f part (the current period) from the J part (the optimal cost of the remaining periods), then this corresponds precisely to the greedy approach. An easy example is when the objective is the sum of the costs at each period:
J(u_1, u_2, ...) = sum_i f_i(u_i).
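A toy Python sketch of that separable case (the per-period costs are made up): the greedy per-period choice and the Bellman recursion return the same value.

    # Hypothetical data: costs[t][k] = cost of choosing option k in period t.
    costs = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]

    # Greedy: because J = sum_t f_t(u_t) separates over periods, picking the
    # cheapest option in each period is globally optimal.
    greedy = sum(min(period) for period in costs)

    # DP via the Bellman recursion J_t = min_u (f_t(u) + J_{t+1}); the min
    # distributes across the sum, so it agrees with the greedy answer.
    def J(t):
        if t == len(costs):
            return 0
        return min(c + J(t + 1) for c in costs[t])

    print(greedy, J(0))  # 4 4  (= 1 + 1 + 2)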
Here's my understanding.
Greedy algorithms and dynamic programming are two different things. A greedy algorithm always makes the choice that seems best at the moment: it commits as soon as a new option appears, regardless of what happens next.
Dynamic programming combines the solutions of subproblems to obtain the final solution. It makes each decision based on the results of subproblems, and it is needed when those results influence the final solution. So these are two different ways of thinking.
Dynamic programming also works on any problem that can be solved by a greedy algorithm, but its time and space costs are usually much higher than those of the greedy algorithm; a greedy algorithm, on the other hand, usually cannot solve a general DP problem.
So the answer is no.
In optimization algorithms, the greedy approach and the dynamic programming approach are basically opposites. The greedy approach is to choose the locally optimal option, while the whole purpose of dynamic programming is to efficiently evaluate the whole range of options.
BUT that doesn't mean you can't have an algorithm that takes advantage of both strategies. The A* path-finding algorithm, for example, does just that, and is both a greedy algorithm and a dynamic programming algorithm. It uses the greedy approach to optimize the best cases, and the dynamic programming approach to optimize the worst cases.
See: https://en.wikipedia.org/wiki/A*_search_algorithm
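A minimal grid-based A* sketch in Python may make the combination concrete (the grid, unit step costs, and Manhattan heuristic are assumptions for the example): the heuristic h supplies the greedy guidance, while memoizing the best-known cost per node plays the dynamic programming role.

    import heapq

    def a_star(grid, start, goal):
        # grid: 2D list, 0 = free cell, 1 = wall; start/goal: (row, col).
        def h(p):  # admissible Manhattan heuristic -> the "greedy" guidance
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
        best_g = {start: 0}                  # memoized cheapest cost so far ("DP")
        while open_heap:
            f, g, node = heapq.heappop(open_heap)
            if node == goal:
                return g
            if g > best_g.get(node, float("inf")):
                continue                     # stale heap entry
            r, c = node
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                        and grid[nr][nc] == 0):
                    ng = g + 1
                    if ng < best_g.get(nxt, float("inf")):
                        best_g[nxt] = ng
                        heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
        return None                          # goal unreachable

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))      # 6: forced around the wall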
I have tried a few backtracking algorithms and successfully converted them to dynamic programming by applying the concept of memoization.
Is it possible to convert every backtracking algorithm to dynamic programming?
If dynamic programming is so much more efficient than backtracking, why do we still use backtracking at all?
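To make the conversion concrete, here is a toy subset-sum sketch in Python (the numbers are made up): the same recursion, once memoized, becomes top-down DP. The conversion pays off only when the recursion's states are compact and actually repeat; in a problem like N-queens the partial states rarely recur, which is one reason plain backtracking is still used.

    from functools import lru_cache

    nums = (3, 34, 4, 12, 5, 2)  # made-up example data

    # Plain backtracking: explore include/exclude choices; worst case exponential.
    def can_sum_bt(i, target):
        if target == 0:
            return True
        if i == len(nums) or target < 0:
            return False
        return can_sum_bt(i + 1, target - nums[i]) or can_sum_bt(i + 1, target)

    # The identical recursion with memoization = top-down dynamic programming.
    # It works because the state (i, target) fully determines the answer and
    # the same states recur along different branches.
    @lru_cache(maxsize=None)
    def can_sum_dp(i, target):
        if target == 0:
            return True
        if i == len(nums) or target < 0:
            return False
        return can_sum_dp(i + 1, target - nums[i]) or can_sum_dp(i + 1, target)

    print(can_sum_bt(0, 9), can_sum_dp(0, 9))  # True True (4 + 5 = 9)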
I have solved a single-objective convex optimization problem (related to interference reduction) using the cvx package in MATLAB. Now I want to extend it to a multi-objective problem. What are the pros and cons of solving it with a genetic algorithm compared to the cvx package? I haven't read anything about genetic algorithms; I only came across them while searching the net for multi-objective optimization.
Optimization algorithms based on derivatives (or gradients), including convex optimization algorithms, essentially try to find a local minimum. The pros and cons are as follows.
Pros:
1. It can be extremely fast, since it only follows the path given by the derivative.
2. Sometimes it attains the global minimum (e.g., when the problem is convex).
Cons:
1. When the problem is highly nonlinear and non-convex, the solution depends on the initial point, so there is a high probability that the solution found is far from the global optimum.
2. It is not well suited to multi-objective optimization problems.
Because of these disadvantages, evolutionary algorithms are generally used for multi-objective optimization; genetic algorithms are one kind of evolutionary algorithm.
Evolutionary algorithms developed for multi-objective optimization problems are fundamentally different from gradient-based algorithms. They are population-based, i.e., they maintain many solutions at once (hundreds or thousands of them), whereas gradient-based algorithms maintain only one.
NSGA-II is an example: https://ieeexplore.ieee.org/document/996017, https://mae.ufl.edu/haftka/stropt/Lectures/multi_objective_GA.pdf, https://web.njit.edu/~horacio/Math451H/download/Seshadri_NSGA-II.pdf
The purpose of multi-objective optimization is to find the Pareto surface (the optimal trade-off surface). Since that surface consists of many points, population-based evolutionary algorithms are naturally suited to approximating it.
(You can instead solve a series of single-objective problems with gradient-based algorithms, e.g., by weighting the objectives, but unless the problem is convex such scalarization cannot recover every Pareto-optimal point accurately.)
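As a small illustration of what a population-based method ultimately reports, here is a minimal Pareto-dominance filter in Python (the candidate objective vectors are made up, with both objectives minimized):

    def dominates(q, p):
        # q dominates p (minimization): no worse in every objective,
        # strictly better in at least one.
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))

    def pareto_front(points):
        # Keep exactly the points that no other point dominates.
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # Made-up two-objective candidates (say, interference vs. power):
    pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (2.5, 3.5)]
    print(pareto_front(pts))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]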
I am taking a parallel programming class and am really struggling to understand some of the computational complexity calculations and algebraic simplifications, specifically for the bitonic sort algorithm when each processor is given a block of elements.
I am looking at the cases where either a hypercube or a 2D mesh interconnection network is used. I was given definitions for calculating speedup, efficiency, and isoefficiency, and for determining whether the solution is cost-optimal. I can understand how speedup is determined, but I am totally lost on how to derive efficiency and isoefficiency; I think I understand cost-optimality. The equations are given below.
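If I remember the text correctly, the definitions in question are the following, where T_s is the serial runtime, T_p the parallel runtime on p processors, T_o = p*T_p - T_s the total overhead, and W the problem size:

    S = T_s / T_p                          (speedup)
    E = S / p = T_s / (p * T_p)            (efficiency)
    p * T_p = Theta(T_s)                   (cost-optimality condition)
    W = K * T_o(W, p),  K = E / (1 - E)    (isoefficiency function)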
The text we are using for the class is Introduction to Parallel Computing, 2nd Edition, by Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar.
For my question regarding the algebra of this problem, please refer to this.
There is an algorithm for triangulating a polygon in linear time due to Chazelle (1991), but, AFAIK, there aren't any standard implementations of his algorithm in general mathematical software libraries.
Does anyone know of such an implementation?
See this answer to the question Powerful algorithms too complex to implement:
According to Skiena (author of The Algorithm Design Manual), "[the] algorithm is quite hopeless to implement."
I've looked for an implementation before but couldn't find one. I think it's safe to assume no one has implemented it, given its complexity; the algorithm also seems to carry quite a large constant factor, so it wouldn't do well in practice against O(n lg n) algorithms with smaller constants.
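For contrast, the triangulators people actually implement are far simpler. Here is a minimal O(n^2) ear-clipping sketch in Python (it assumes a simple polygon with vertices in counter-clockwise order and ignores degenerate cases; it illustrates the practical alternative, not Chazelle's algorithm):

    def cross(o, a, b):
        # Signed area of triangle (o, a, b); > 0 means a counter-clockwise turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def point_in_tri(p, a, b, c):
        # Is p inside (or on the boundary of) the CCW triangle (a, b, c)?
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    def ear_clip(poly):
        # poly: list of (x, y) vertices of a simple polygon in CCW order.
        verts = list(poly)
        tris = []
        while len(verts) > 3:
            n = len(verts)
            for i in range(n):
                a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
                if cross(a, b, c) <= 0:
                    continue          # reflex (or collinear) vertex: not an ear
                if any(point_in_tri(p, a, b, c)
                       for p in verts if p not in (a, b, c)):
                    continue          # another vertex lies inside: not an ear
                tris.append((a, b, c))
                del verts[i]          # clip the ear
                break
            else:
                break                 # degenerate input; bail out of the loop
        tris.append(tuple(verts))
        return tris

    # Example: an L-shaped polygon, CCW.
    print(ear_clip([(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]))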
This is claimed to be an implementation of Chazelle's algorithm for triangulating a simple polygon in linear time (mainly C++ and C).