Support Resistance Algorithm - Technical analysis [closed]

I have an intra-day chart and I am trying to figure out how to calculate support and resistance levels. Does anyone know an algorithm for doing that, or a good starting point?

Yes, a very simple algorithm is to choose a timeframe, say 100 bars, then look for local turning points, or maxima and minima. Maxima and minima can be computed from a smoothed closing price by using the first and second derivatives (dy/dx and d^2y/dx^2). Where dy/dx = 0 and d^2y/dx^2 is positive, you have a minimum; where dy/dx = 0 and d^2y/dx^2 is negative, you have a maximum.
In practical terms this could be computed by iterating over your smoothed closing-price series and looking at three adjacent points. If the points are lower/higher/lower in relative terms then you have a maximum; if higher/lower/higher, you have a minimum. You may wish to fine-tune this detection method to look at more points (say 5 or 7) and only trigger if the edge points are a certain % away from the centre point. This is similar to the algorithm that the ZigZag indicator uses.
Once you have local maxima and minima, you then want to look for clusters of turning points within a certain distance of each other in the y direction. This is simple: take the list of N turning points and compute the y distance between each and every other discovered turning point. If the distance is less than a fixed constant then you have found two "close" turning points, indicating possible support/resistance.
You could then rank your S/R lines, so that two turning points at $20 are less important than three turning points at $20, for instance.
An extension to this would be to compute trendlines. With the list of turning points discovered, take each point in turn and select two other points, trying to fit a straight-line equation through all three. If the equation is solvable within a certain error margin, you have a sloping trendline. If not, discard it and move on to the next triplet of points.
The reason why you need three at a time is that any two points can be fitted by a straight line exactly. Another way to compute trendlines would be to compute the straight-line equation for every pair of turning points, then see whether a third point (or more than one) lies on the same line within a margin of error. If one or more other points do lie on this line, bingo, you have calculated a support/resistance trendline.
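As a rough illustration, here is a minimal Python sketch of the pairs-plus-third-point approach (all names are mine, and tolerance is an assumed error margin, not something specified in the answer):

import itertools

def find_trendlines(turning_points, tolerance=0.01):
    """turning_points: list of (x, y) pairs. Returns (m, c) for every
    line y = mx + c that at least three turning points fit, within
    tolerance in the y direction."""
    lines = []
    for (x1, y1), (x2, y2) in itertools.combinations(turning_points, 2):
        if x1 == x2:
            continue  # vertical line, cannot be written as y = mx + c
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # count how many turning points lie on this candidate line
        hits = sum(1 for (x, y) in turning_points
                   if abs(y - (m * x + c)) <= tolerance)
        if hits >= 3:  # the two defining points plus at least one more
            lines.append((m, c))
    return lines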
No code examples from me, sorry; I'm just giving you some ideas on how it could be done (though see the sketches below). In summary:
Inputs to the system
Lookback period L (number of bars)
Closing prices for L bars
Smoothing factor (to smooth closing price)
Error Margin or Delta (maximum distance between turning points for them to count as a match)
Outputs
List of turning points, call them tPoints[] (x,y)
List of potential trendlines, each with the line equation (y = mx + c)
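To make the summary concrete, here is a hedged Python sketch of the turning-point detection and y-distance clustering described above (the moving-average smoothing and all names are my own assumptions, not part of the answer):

import numpy as np

def turning_points(closes, smooth_window=5):
    """Return (index, price, kind) for local extrema of the smoothed closes."""
    kernel = np.ones(smooth_window) / smooth_window
    s = np.convolve(closes, kernel, mode='valid')
    points = []
    for i in range(1, len(s) - 1):
        if s[i] > s[i - 1] and s[i] > s[i + 1]:
            points.append((i, s[i], 'max'))
        elif s[i] < s[i - 1] and s[i] < s[i + 1]:
            points.append((i, s[i], 'min'))
    return points

def cluster_levels(points, delta):
    """Group turning points whose prices lie within delta of each other."""
    levels = []
    for _, price, _ in sorted(points, key=lambda p: p[1]):
        if levels and abs(price - levels[-1][-1]) <= delta:
            levels[-1].append(price)  # extend the current cluster
        else:
            levels.append([price])  # start a new cluster
    # rank: more turning points near a price = stronger S/R
    return sorted(levels, key=len, reverse=True)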
EDIT: Update
I recently learned about a very simple indicator called a Donchian Channel, which basically plots a channel of the highest high and lowest low over the last 20 bars. It can be used to plot an approximate support/resistance level. But the above - a Donchian Channel plus turning points - is cooler ^_^
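For reference, the Donchian Channel itself is a two-liner with pandas (a minimal sketch, assuming high and low are pandas Series of bar highs and lows):

import pandas as pd

def donchian_channel(high, low, window=20):
    """Upper band = highest high, lower band = lowest low over the window."""
    upper = high.rolling(window).max()
    lower = low.rolling(window).min()
    return upper, lower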

I am using a much less complex algorithm in my algorithmic trading system.
The following steps are one side of the algorithm and are used for calculating support levels. Please read the notes below the algorithm to understand how to calculate resistance levels.
Algorithm
Break the timeseries into segments of size N (say, N = 5)
Identify the minimum value of each segment; you will have an array of minimum values from all segments = :arrayOfMin
Find the minimum of :arrayOfMin = :minValue
See if any of the remaining values fall within range (X% of :minValue) (say, X = 1.3%)
Make a separate array (:supportArr)
add values within range & remove these values from :arrayOfMin
also add :minValue from step 3
Calculating support (or resistance)
Take the mean of this array = support_level
If a support level is tested many times, it is considered strong.
strength_of_support = supportArr.length
level_type (SUPPORT|RESISTANCE): if the current price is below the support level, the level changes role and becomes resistance
Repeat steps 3 to 7 until :arrayOfMin is empty
You will have all support/resistance values with a strength. Now smoothen these values; if any support levels are too close, eliminate one of them.
These support/resistance levels were calculated considering the support-level search. You need to perform steps 2 to 9 considering the resistance-level search. Please see the notes and implementation.
Notes:
Adjust the values of N & X to get more accurate results.
For example, for less volatile stocks or equity indexes use (N = 10, X = 1.2%)
For high volatile stocks use (N = 22, X = 1.5%)
For resistance, the procedure is exactly opposite (use maximum function instead of minimum)
This algorithm was purposely kept simple to avoid complexity, it can be improved to give better results.
Here's my implementation:
public interface ISupportResistanceCalculator {
/**
* Identifies support / resistance levels.
*
* @param timeseries
* timeseries
* @param beginIndex
* starting point (inclusive)
* @param endIndex
* ending point (exclusive)
* @param segmentSize
* number of elements per internal segment
* @param rangePct
* range % (Example: 1.5%)
* @return A tuple with the list of support levels and a list of resistance
* levels
*/
Tuple<List<Level>, List<Level>> identify(List<Float> timeseries,
int beginIndex, int endIndex, int segmentSize, float rangePct);
}
Main calculator class
/**
*
*/
package com.perseus.analysis.calculator.technical.trend;
import static com.perseus.analysis.constant.LevelType.RESISTANCE;
import static com.perseus.analysis.constant.LevelType.SUPPORT;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import com.google.common.collect.Lists;
import com.perseus.analysis.calculator.mean.IMeanCalculator;
import com.perseus.analysis.calculator.timeseries.ITimeSeriesCalculator;
import com.perseus.analysis.constant.LevelType;
import com.perseus.analysis.model.Tuple;
import com.perseus.analysis.model.technical.Level;
import com.perseus.analysis.model.timeseries.ITimeseries;
import com.perseus.analysis.util.CollectionUtils;
/**
* A support and resistance calculator.
*
* @author PRITESH
*
*/
public class SupportResistanceCalculator implements
ISupportResistanceCalculator {
static interface LevelHelper {
Float aggregate(List<Float> data);
LevelType type(float level, float priceAsOfDate, final float rangePct);
boolean withinRange(Float node, float rangePct, Float val);
}
static class Support implements LevelHelper {
@Override
public Float aggregate(final List<Float> data) {
return Collections.min(data);
}
@Override
public LevelType type(final float level, final float priceAsOfDate,
final float rangePct) {
final float threshold = level * (1 - (rangePct / 100));
return (priceAsOfDate < threshold) ? RESISTANCE : SUPPORT;
}
@Override
public boolean withinRange(final Float node, final float rangePct,
final Float val) {
final float threshold = node * (1 + (rangePct / 100f));
if (val < threshold)
return true;
return false;
}
}
static class Resistance implements LevelHelper {
@Override
public Float aggregate(final List<Float> data) {
return Collections.max(data);
}
@Override
public LevelType type(final float level, final float priceAsOfDate,
final float rangePct) {
final float threshold = level * (1 + (rangePct / 100));
return (priceAsOfDate > threshold) ? SUPPORT : RESISTANCE;
}
@Override
public boolean withinRange(final Float node, final float rangePct,
final Float val) {
final float threshold = node * (1 - (rangePct / 100f));
if (val > threshold)
return true;
return false;
}
}
private static final int SMOOTHEN_COUNT = 2;
private static final LevelHelper SUPPORT_HELPER = new Support();
private static final LevelHelper RESISTANCE_HELPER = new Resistance();
private final ITimeSeriesCalculator tsCalc;
private final IMeanCalculator meanCalc;
public SupportResistanceCalculator(final ITimeSeriesCalculator tsCalc,
final IMeanCalculator meanCalc) {
super();
this.tsCalc = tsCalc;
this.meanCalc = meanCalc;
}
@Override
public Tuple<List<Level>, List<Level>> identify(
final List<Float> timeseries, final int beginIndex,
final int endIndex, final int segmentSize, final float rangePct) {
final List<Float> series = this.seriesToWorkWith(timeseries,
beginIndex, endIndex);
// Split the timeseries into chunks
final List<List<Float>> segments = this.splitList(series, segmentSize);
final Float priceAsOfDate = series.get(series.size() - 1);
final List<Level> levels = Lists.newArrayList();
this.identifyLevel(levels, segments, rangePct, priceAsOfDate,
SUPPORT_HELPER);
this.identifyLevel(levels, segments, rangePct, priceAsOfDate,
RESISTANCE_HELPER);
final List<Level> support = Lists.newArrayList();
final List<Level> resistance = Lists.newArrayList();
this.separateLevels(support, resistance, levels);
// Smoothen the levels
this.smoothen(support, resistance, rangePct);
return new Tuple<>(support, resistance);
}
private void identifyLevel(final List<Level> levels,
final List<List<Float>> segments, final float rangePct,
final float priceAsOfDate, final LevelHelper helper) {
final List<Float> aggregateVals = Lists.newArrayList();
// Find min/max of each segment
for (final List<Float> segment : segments) {
aggregateVals.add(helper.aggregate(segment));
}
while (!aggregateVals.isEmpty()) {
final List<Float> withinRange = new ArrayList<>();
final Set<Integer> withinRangeIdx = new TreeSet<>();
// Support/resistance level node
final Float node = helper.aggregate(aggregateVals);
// Find elements within range
for (int i = 0; i < aggregateVals.size(); ++i) {
final Float f = aggregateVals.get(i);
if (helper.withinRange(node, rangePct, f)) {
withinRangeIdx.add(i);
withinRange.add(f);
}
}
// Remove elements within range
CollectionUtils.remove(aggregateVals, withinRangeIdx);
// Take an average
final float level = this.meanCalc.mean(
withinRange.toArray(new Float[] {}), 0, withinRange.size());
final float strength = withinRange.size();
levels.add(new Level(helper.type(level, priceAsOfDate, rangePct),
level, strength));
}
}
private List<List<Float>> splitList(final List<Float> series,
final int segmentSize) {
final List<List<Float>> splitList = CollectionUtils
.convertToNewLists(CollectionUtils.splitList(series,
segmentSize));
if (splitList.size() > 1) {
// If the last segment is too small
final int lastIdx = splitList.size() - 1;
final List<Float> last = splitList.get(lastIdx);
if (last.size() <= (segmentSize / 1.5f)) {
// Remove last segment
splitList.remove(lastIdx);
// Move all elements from removed last segment to new last
// segment
splitList.get(lastIdx - 1).addAll(last);
}
}
return splitList;
}
private void separateLevels(final List<Level> support,
final List<Level> resistance, final List<Level> levels) {
for (final Level level : levels) {
if (level.getType() == SUPPORT) {
support.add(level);
} else {
resistance.add(level);
}
}
}
private void smoothen(final List<Level> support,
final List<Level> resistance, final float rangePct) {
for (int i = 0; i < SMOOTHEN_COUNT; ++i) {
this.smoothen(support, rangePct);
this.smoothen(resistance, rangePct);
}
}
/**
* Removes one of the adjacent levels which are close to each other.
*/
private void smoothen(final List<Level> levels, final float rangePct) {
if (levels.size() < 2)
return;
final List<Integer> removeIdx = Lists.newArrayList();
Collections.sort(levels);
for (int i = 0; i < (levels.size() - 1); i++) {
final Level currentLevel = levels.get(i);
final Level nextLevel = levels.get(i + 1);
final Float current = currentLevel.getLevel();
final Float next = nextLevel.getLevel();
final float difference = Math.abs(next - current);
final float threshold = (current * rangePct) / 100;
if (difference < threshold) {
final int remove = currentLevel.getStrength() >= nextLevel
.getStrength() ? i : i + 1;
removeIdx.add(remove);
i++; // start with next pair
}
}
CollectionUtils.remove(levels, removeIdx);
}
private List<Float> seriesToWorkWith(final List<Float> timeseries,
final int beginIndex, final int endIndex) {
if ((beginIndex == 0) && (endIndex == timeseries.size()))
return timeseries;
return timeseries.subList(beginIndex, endIndex);
}
}
Here are some supporting classes:
public enum LevelType {
SUPPORT, RESISTANCE
}
public class Tuple<A, B> {
private final A a;
private final B b;
public Tuple(final A a, final B b) {
super();
this.a = a;
this.b = b;
}
public final A getA() {
return this.a;
}
public final B getB() {
return this.b;
}
@Override
public String toString() {
return "Tuple [a=" + this.a + ", b=" + this.b + "]";
};
}
public abstract class CollectionUtils {
/**
* Removes items from the list based on their indexes.
*
* @param list
* list
* @param indexes
* indexes; this collection must be sorted in ascending order
*/
public static <T> void remove(final List<T> list,
final Collection<Integer> indexes) {
int i = 0;
for (final int idx : indexes) {
list.remove(idx - i++);
}
}
/**
* Splits the given list in segments of the specified size.
*
* @param list
* list
* @param segmentSize
* segment size
* @return segments
*/
public static <T> List<List<T>> splitList(final List<T> list,
final int segmentSize) {
int from = 0, to = 0;
final List<List<T>> result = new ArrayList<>();
while (from < list.size()) {
to = from + segmentSize;
if (to > list.size()) {
to = list.size();
}
result.add(list.subList(from, to));
from = to;
}
return result;
}
}
/**
* This class represents a support / resistance level.
*
* @author PRITESH
*
*/
public class Level implements Serializable {
private static final long serialVersionUID = -7561265699198045328L;
private final LevelType type;
private final float level, strength;
public Level(final LevelType type, final float level) {
this(type, level, 0f);
}
public Level(final LevelType type, final float level, final float strength) {
super();
this.type = type;
this.level = level;
this.strength = strength;
}
public final LevelType getType() {
return this.type;
}
public final float getLevel() {
return this.level;
}
public final float getStrength() {
return this.strength;
}
@Override
public String toString() {
return "Level [type=" + this.type + ", level=" + this.level
+ ", strength=" + this.strength + "]";
}
}

I put together a package that implements support and resistance trendlines like what you're asking about. Here is an example:
import numpy as np
import pandas.io.data as pd  # deprecated in modern pandas; pandas-datareader replaced it
from matplotlib.pyplot import *
from trendy import gentrends  # assuming trendy.py from the linked repo is on the path

gentrends('fb', window = 1.0/3.0)
(Output chart omitted.)
That example just pulls the adjusted close prices, but if you have intraday data already loaded in you can also feed it raw data as a numpy array and it will implement the same algorithm on that data as it would if you just fed it a ticker symbol.
Not sure if this is exactly what you were looking for but hopefully this helps get you started. The code and some more explanation can be found on the GitHub page where I have it hosted: https://github.com/dysonance/Trendy

I have figured out another way of calculating Support/Resistance dynamically.
Steps:
Create a list of important prices - the high and low of each candle in your range is important. Each of these prices is basically a probable SR (Support / Resistance).
Give each price a score.
Sort the prices by score and remove the ones close to each other (within a distance of x% of each other).
Print the top N prices having a minimum score of Y. These are your Support Resistances. It worked very well for me in ~300 different stocks.
The scoring technique
A price is acting as a strong SR if there are many candles which come close to it but cannot cross it.
So, for each candle which comes close to this price (within a distance of y% from the price), we add +S1 to the score.
For each candle which cuts through this price, we add -S2 (negative) to the score.
This should give you a very basic idea of how to assign scores.
Now you have to tweak it according to your requirements.
Some tweak I made and which improved the performance a lot are as follows:
Different scores for different types of cut: if the body of a candle cuts through the price, the score change is -S3, but if the wick of a candle cuts through the price, the score change is -S4. Here Abs(S3) > Abs(S4) because a cut by the body is more significant than a cut by the wick.
If a candle closes close to the price but is unable to cross it, and it is a high (higher than two candles on each side) or a low (lower than two candles on each side), then add a higher score than for other normal candles closing near it.
If the candle closing near this level is a high or low, and the price was in a downtrend or an uptrend (at least a y% move), then add a higher score to this point.
You can remove some prices from the initial list. I consider a price only if it is the highest or the lowest among N candles on each side of it.
Here is a snippet of my code.
private void findSupportResistance(List<Candle> candles, Long scripId) throws ExecutionException {
// This is a cron job, so I skip for some time once a SR is found in a stock
if(processedCandles.getIfPresent(scripId) == null || checkAlways) {
//Combining small candles to get larger candles of required timeframe. ( I have 1 minute candles and here creating 1 Hr candles)
List<Candle> cumulativeCandles = cumulativeCandleHelper.getCumulativeCandles(candles, CUMULATIVE_CANDLE_SIZE);
//Tell whether each point is a high(higher than two candles on each side) or a low(lower than two candles on each side)
List<Boolean> highLowValueList = this.highLow.findHighLow(cumulativeCandles);
String name = scripIdCache.getScripName(scripId);
Set<Double> impPoints = new HashSet<Double>();
int pos = 0;
for(Candle candle : cumulativeCandles){
//A candle is imp only if it is the highest / lowest among #CONSECUTIVE_CANDLE_TO_CHECK_MIN on each side
List<Candle> subList = cumulativeCandles.subList(Math.max(0, pos - CONSECUTIVE_CANDLE_TO_CHECK_MIN),
Math.min(cumulativeCandles.size(), pos + CONSECUTIVE_CANDLE_TO_CHECK_MIN));
if(subList.stream().min(Comparator.comparing(Candle::getLow)).get().getLow().equals(candle.getLow()) ||
subList.stream().max(Comparator.comparing(Candle::getHigh)).get().getHigh().equals(candle.getHigh())) { // max, not min, for the high side
impPoints.add(candle.getHigh());
impPoints.add(candle.getLow());
}
pos++;
}
Iterator<Double> iterator = impPoints.iterator();
List<PointScore> score = new ArrayList<PointScore>();
while (iterator.hasNext()){
Double currentValue = iterator.next();
//Get score of each point
score.add(getScore(cumulativeCandles, highLowValueList, currentValue));
}
score.sort((o1, o2) -> o2.getScore().compareTo(o1.getScore()));
List<Double> used = new ArrayList<Double>();
int total = 0;
Double min = getMin(cumulativeCandles);
Double max = getMax(cumulativeCandles);
for(PointScore pointScore : score){
// Each point should have at least #MIN_SCORE_TO_PRINT point
if(pointScore.getScore() < MIN_SCORE_TO_PRINT){
break;
}
//The extremes always come out as a strong SR, so I remove some of them
// I also reject a price which is very close to one already used
if (!similar(pointScore.getPoint(), used) && !closeFromExtreme(pointScore.getPoint(), min, max)) {
logger.info("Strong SR for scrip {} at {} and score {}", name, pointScore.getPoint(), pointScore.getScore());
// logger.info("Events at point are {}", pointScore.getPointEventList());
used.add(pointScore.getPoint());
total += 1;
}
if(total >= totalPointsToPrint){
break;
}
}
}
}
private boolean closeFromExtreme(Double key, Double min, Double max) {
return Math.abs(key - min) < (min * DIFF_PERC_FROM_EXTREME / 100.0) || Math.abs(key - max) < (max * DIFF_PERC_FROM_EXTREME / 100);
}
private Double getMin(List<Candle> cumulativeCandles) {
return cumulativeCandles.stream()
.min(Comparator.comparing(Candle::getLow)).get().getLow();
}
private Double getMax(List<Candle> cumulativeCandles) {
return cumulativeCandles.stream()
.max(Comparator.comparing(Candle::getHigh)).get().getHigh(); // compare by high, not low
}
private boolean similar(Double key, List<Double> used) {
for(Double value : used){
if(Math.abs(key - value) <= (DIFF_PERC_FOR_INTRASR_DISTANCE * value / 100)){
return true;
}
}
return false;
}
private PointScore getScore(List<Candle> cumulativeCandles, List<Boolean> highLowValueList, Double price) {
List<PointEvent> events = new ArrayList<>();
Double score = 0.0;
int pos = 0;
int lastCutPos = -10;
for(Candle candle : cumulativeCandles){
//If the body of the candle cuts through the price, then deduct some score
if(cutBody(price, candle) && (pos - lastCutPos > MIN_DIFF_FOR_CONSECUTIVE_CUT)){
score += scoreForCutBody;
lastCutPos = pos;
events.add(new PointEvent(PointEvent.Type.CUT_BODY, candle.getTimestamp(), scoreForCutBody));
//If the wick of the candle cuts through the price, then deduct some score
} else if(cutWick(price, candle) && (pos - lastCutPos > MIN_DIFF_FOR_CONSECUTIVE_CUT)){
score += scoreForCutWick;
lastCutPos = pos;
events.add(new PointEvent(PointEvent.Type.CUT_WICK, candle.getTimestamp(), scoreForCutWick));
//If the price is close to the high of some candle and it was in an uptrend, then add some score to this
} else if(touchHigh(price, candle) && inUpTrend(cumulativeCandles, price, pos)){
Boolean highLowValue = highLowValueList.get(pos);
//If it is a high, then add some score S1
if(highLowValue != null && highLowValue){
score += scoreForTouchHighLow;
events.add(new PointEvent(PointEvent.Type.TOUCH_UP_HIGHLOW, candle.getTimestamp(), scoreForTouchHighLow));
//Else add S2 (S1 > S2, since highs/lows matter more)
} else {
score += scoreForTouchNormal;
events.add(new PointEvent(PointEvent.Type.TOUCH_UP, candle.getTimestamp(), scoreForTouchNormal));
}
//If the price is close to the low of some candle and it was in a downtrend, then add some score to this
} else if(touchLow(price, candle) && inDownTrend(cumulativeCandles, price, pos)){
Boolean highLowValue = highLowValueList.get(pos);
//If it is a low, then add some score S1
if (highLowValue != null && !highLowValue) {
score += scoreForTouchHighLow;
events.add(new PointEvent(PointEvent.Type.TOUCH_DOWN, candle.getTimestamp(), scoreForTouchHighLow));
//Else add S2 (S1 > S2)
} else {
score += scoreForTouchNormal;
events.add(new PointEvent(PointEvent.Type.TOUCH_DOWN_HIGHLOW, candle.getTimestamp(), scoreForTouchNormal));
}
}
pos += 1;
}
return new PointScore(price, score, events);
}
private boolean inDownTrend(List<Candle> cumulativeCandles, Double price, int startPos) {
//Either move #MIN_PERC_FOR_TREND in direction of trend, or cut through the price
for(int pos = startPos; pos >= 0; pos-- ){
Candle candle = cumulativeCandles.get(pos);
if(candle.getLow() < price){
return false;
}
if(candle.getLow() - price > (price * MIN_PERC_FOR_TREND / 100)){
return true;
}
}
return false;
}
private boolean inUpTrend(List<Candle> cumulativeCandles, Double price, int startPos) {
for(int pos = startPos; pos >= 0; pos-- ){
Candle candle = cumulativeCandles.get(pos);
if(candle.getHigh() > price){
return false;
}
if(price - candle.getLow() > (price * MIN_PERC_FOR_TREND / 100)){
return true;
}
}
return false;
}
private boolean touchHigh(Double price, Candle candle) {
Double high = candle.getHigh();
Double ltp = candle.getLtp();
return high <= price && Math.abs(high - price) < ltp * DIFF_PERC_FOR_CANDLE_CLOSE / 100;
}
private boolean touchLow(Double price, Candle candle) {
Double low = candle.getLow();
Double ltp = candle.getLtp();
return low >= price && Math.abs(low - price) < ltp * DIFF_PERC_FOR_CANDLE_CLOSE / 100;
}
private boolean cutBody(Double point, Candle candle) {
return Math.max(candle.getOpen(), candle.getClose()) > point && Math.min(candle.getOpen(), candle.getClose()) < point;
}
private boolean cutWick(Double price, Candle candle) {
return !cutBody(price, candle) && candle.getHigh() > price && candle.getLow() < price;
}
Some Helper classes:
public class PointScore {
Double point;
Double score;
List<PointEvent> pointEventList;
public PointScore(Double point, Double score, List<PointEvent> pointEventList) {
this.point = point;
this.score = score;
this.pointEventList = pointEventList;
}
}
public class PointEvent {
public enum Type{
CUT_BODY, CUT_WICK, TOUCH_DOWN_HIGHLOW, TOUCH_DOWN, TOUCH_UP_HIGHLOW, TOUCH_UP;
}
Type type;
Date timestamp;
Double scoreChange;
public PointEvent(Type type, Date timestamp, Double scoreChange) {
this.type = type;
this.timestamp = timestamp;
this.scoreChange = scoreChange;
}
@Override
public String toString() {
return "PointEvent{" +
"type=" + type +
", timestamp=" + timestamp +
", points=" + scoreChange +
'}';
}
}
Some examples of S/R levels created by the code (charts omitted).

Here's a Python function to find support / resistance levels.
This function takes a numpy array of last traded prices and returns
lists of support and resistance levels respectively. n is the number
of entries to be scanned.
import numpy as np
from scipy.signal import savgol_filter as smooth

def supres(ltp, n):
    """
    This function takes a numpy array of last traded price
    and returns a list of support and resistance levels
    respectively. n is the number of entries to be scanned.
    """
    # converting n to a nearest even number
    if n % 2 != 0:
        n += 1
    n_ltp = ltp.shape[0]
    # smoothing the curve
    ltp_s = smooth(ltp, (n + 1), 3)
    # taking a simple derivative
    ltp_d = np.zeros(n_ltp)
    ltp_d[1:] = np.subtract(ltp_s[1:], ltp_s[:-1])
    resistance = []
    support = []
    for i in range(n_ltp - n):
        arr_sl = ltp_d[i:(i + n)]
        first = arr_sl[:(n // 2)]  # first half
        last = arr_sl[(n // 2):]  # second half
        r_1 = np.sum(first > 0)
        r_2 = np.sum(last < 0)
        s_1 = np.sum(first < 0)
        s_2 = np.sum(last > 0)
        # local maxima detection
        if (r_1 == (n // 2)) and (r_2 == (n // 2)):
            resistance.append(ltp[i + ((n // 2) - 1)])
        # local minima detection
        if (s_1 == (n // 2)) and (s_2 == (n // 2)):
            support.append(ltp[i + ((n // 2) - 1)])
    return support, resistance

The best way I have found to get S/R levels is with clustering. Maxima and minima are calculated and then those values are flattened (like a scatter plot where x is the maxima and minima values and y is always 1). You then cluster these values using scikit-learn.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import AgglomerativeClustering
# df is assumed to be a DataFrame with High_15T / Low_15T price columns
# Calculate VERY simple waves
mx = df.High_15T.rolling( 100 ).max().rename('waves')
mn = df.Low_15T.rolling( 100 ).min().rename('waves')
mx_waves = pd.concat([mx,pd.Series(np.zeros(len(mx))+1)],axis = 1)
mn_waves = pd.concat([mn,pd.Series(np.zeros(len(mn))+-1)],axis = 1)
mx_waves.drop_duplicates('waves',inplace = True)
mn_waves.drop_duplicates('waves',inplace = True)
W = pd.concat([mx_waves, mn_waves]).sort_index()  # DataFrame.append is deprecated
W = W[ W[0] != W[0].shift() ].dropna()
# Find Support/Resistance with clustering
# Create [x,y] array where y is always 1
X = np.concatenate((W.waves.values.reshape(-1,1),
(np.zeros(len(W))+1).reshape(-1,1)), axis = 1 )
# Pick n_clusters, I chose the sqrt of the df + 2
n = round(len(W)**(1/2)) + 2
cluster = AgglomerativeClustering(n_clusters=n,
affinity='euclidean', linkage='ward')
cluster.fit_predict(X)
W['clusters'] = cluster.labels_
# I chose to get the index of the max wave for each cluster
W2 = W.loc[W.groupby('clusters')['waves'].idxmax()]
# Plot it
fig, axis = plt.subplots()
for row in W2.itertuples():
    axis.axhline( y = row.waves,
                  color = 'green', ls = 'dashed' )
axis.plot( W.index.values, W.waves.values )
plt.show()

Here is the Pine Script code for S/Rs. It doesn't include all the logic Dr. Andrew or Nilendu discuss, but it is definitely a good start:
https://www.tradingview.com/script/UUUyEoU2-S-R-Barry-extended-by-PeterO/
//@version=3
study(title="S/R Barry, extended by PeterO", overlay=true)
FractalLen=input(10)
isFractal(x) => highestbars(x,FractalLen*2+1)==-FractalLen
sF=isFractal(-low), support=low, support:=sF ? low[FractalLen] : support[1]
rF=isFractal(high), resistance=high, resistance:=rF ? high[FractalLen] : resistance[1]
plot(series=support, color=sF?#00000000:blue, offset=-FractalLen)
plot(series=resistance, color=rF?#00000000:red, offset=-FractalLen)
supportprevious=low, supportprevious:=sF ? support[1] : supportprevious[1]
resistanceprevious=low, resistanceprevious:=rF ? resistance[1] : resistanceprevious[1]
plot(series=supportprevious, color=blue, style=circles, offset=-FractalLen)
plot(series=resistanceprevious, color=red, style=circles, offset=-FractalLen)

I'm not sure if it's really "Support & Resistance" detection but what about this:
function getRanges(_nums=[], _diff=1, percent=true) {
let nums = [..._nums];
nums.sort((a,b) => a-b);
const ranges = [];
for (let i=0; i<nums.length; i+=1) {
const num = nums[i];
const diff = percent ? perc(_diff, num) : _diff;
const range = nums.filter( j => isInRange(j, num-diff, num+diff) );
if (range.length) {
ranges.push(range);
nums = nums.slice(range.length);
i = -1;
}
}
return ranges;
}
function perc(percent, n) {
return n * (percent * 0.01);
}
function isInRange(n, min, max) {
return n >= min && n <= max;
}
So let's say you have an array of close prices:
const nums = [12, 14, 15, 17, 18, 19, 19, 21, 28, 29, 30, 30, 31, 32, 34, 34, 36, 39, 43, 44, 48, 48, 48, 51, 52, 58, 60, 61, 67, 68, 69, 73, 73, 75, 87, 89, 94, 95, 96, 98];
and you want to split the numbers by some amount, like a difference of 5 (or 5%), then you would get back a result array like this:
const ranges = getRanges(nums, 5, false) // ranges of -5 to +5
/* [
[12, 14, 15, 17]
[18, 19, 19, 21]
[28, 29, 30, 30, 31, 32]
[34, 34, 36, 39]
[43, 44, 48, 48, 48]
[51, 52]
[58, 60, 61]
[67, 68, 69]
[73, 73, 75]
[87, 89]
[94, 95, 96, 98]
]
*/
// or like
//const ranges = getRanges(nums, 5, true) // ranges of -5% to +5%
Therefore, the more values a range contains, the more important a support/resistance area it is.
(again: not sure if this could be classified as "Support & Resistance")

I briefly read Jacob's contribution. I think it may have some issues with the code below:
# Now the min
if min1 - window < 0:
min2 = min(x[(min1 + window):])
else:
min2 = min(x[0:(min1 - window)])
# Now find the indices of the secondary extrema
max2 = np.where(x == max2)[0][0] # find the index of the 2nd max
min2 = np.where(x == min2)[0][0] # find the index of the 2nd min
The algorithm does try to find the secondary min value outside the given window, but then the position corresponding to np.where(x == min2)[0][0] may lie inside the window, due to possibly duplicate values inside the window.
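One hedged way around this (a sketch reusing the names x, min1 and window from the quoted code) is to take argmin over the slice itself and add the slice's offset, so a duplicate value inside the window can never be selected:

import numpy as np

def secondary_min_index(x, min1, window):
    """Index of the minimum outside the window around min1; argmin on
    the slice plus the slice offset sidesteps the duplicate-value
    problem of np.where(x == min2)."""
    if min1 - window < 0:
        offset = min1 + window
        return offset + int(np.argmin(x[offset:]))
    return int(np.argmin(x[0:(min1 - window)]))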

If you are looking for horizontal SR lines, I would rather want to know the whole distribution of prices. But I think it is also a reasonable shortcut to just take the max of your histogram.
# python + pandas
import numpy as np
import matplotlib.pyplot as plt
# spy is assumed to be a DataFrame with a "Close" price column
spy["Close"][:60].plot()
hist, border = np.histogram(spy["Close"][:60].values, density=False)
sr = border[np.argmax(hist)]
plt.axhline(y=sr, color='r', linestyle='-')
You might need to tweak the bins and eventually you want to plot the whole bin not just the lower bound.
lower_bound = border[np.argmax(hist)]
upper_bound = border[np.argmax(hist) + 1]
P.S. the underlying "idea" is very similar to @Nilendu's solution.

Interpretations of Support & Resistance levels are very subjective. A lot of people do it different ways. […] When I am evaluating S&R from the charts, I am looking for two primary things:
Bounce off - There needs to be a visible departure (bounce) from the horizontal line which is perceived to define the level of support or resistance.
Multiple touches - A single turning point is not sufficient to establish a support or resistance level. Multiple touches of approximately the same level should be present, such that a horizontal line could be drawn through those turning points.

Related

Paper cut algorithm

I want to create a function to determine the maximum number of pieces of paper that can be cut from a parent paper size.
The formula above is still not optimal; using it will only produce at most 32 cuts/sheet.
I want it like below.
This seems to be a very difficult problem to solve optimally. See http://lagrange.ime.usp.br/~lobato/packing/ for a discussion of a 2008 paper claiming that the problem is believed (but not proven) to be NP-hard. The researchers found some approximation algorithms and implemented them on that website.
The following solution uses Top-Down Dynamic Programming to find optimal solutions to this problem. I am providing this solution in C#, which shouldn't be too hard to convert into the language of your choice (or whatever style of pseudocode you prefer). I have tested this solution on your specific example and it completes in well under a second.
It should be noted that this solution assumes that only guillotine cuts are allowed. This is a common restriction for real-world 2D Stock-Cutting applications and it greatly simplifies the solution complexity. However, CS, Math and other programming problems often allow all types of cutting, so in that case this solution would not necessarily find the optimal solution (but it would still provide a better heuristic answer than your current formula).
First, we need a value-structure to represent the size of the starting stock, the desired rectangle(s), and the pieces cut from the stock (this needs to be a value type because it will be used as the key to our memoization cache and other collections, and we need to compare the actual values rather than an object reference address):
public struct Vector2D
{
public int X;
public int Y;
public Vector2D(int x, int y)
{
X = x;
Y = y;
}
}
Here is the main method to be called. Note that all values need to be integers; for the specific case above this just means multiplying everything by 100. These methods require integers but are otherwise scale-invariant, so multiplying by 100 or 1000 or whatever won't affect performance (just make sure that the values don't overflow an int).
public int SolveMaxCount1R(Vector2D Parent, Vector2D Item)
{
// make a list to hold both the item size and its rotation
List<Vector2D> itemSizes = new List<Vector2D>();
itemSizes.Add(Item);
if (Item.X != Item.Y)
{
itemSizes.Add(new Vector2D(Item.Y, Item.X));
}
int solution = SolveGeneralMaxCount(Parent, itemSizes.ToArray());
return solution;
}
Here is an example of how you would call this method with your parameter values. In this case I have assumed that all of the solution methods are part of a class called SolverClass:
SolverClass solver = new SolverClass();
int count = solver.SolveMaxCount1R(new Vector2D(2500, 3800), new Vector2D(425, 550));
//(all units are in tenths of a millimeter to make everything integers)
The main method calls a general solver method for this type of problem (that is not restricted to just one size rectangle and its rotation):
public int SolveGeneralMaxCount(Vector2D Parent, Vector2D[] ItemSizes)
{
// determine the maximum x and y scaling factors using GCDs (Greatest
// Common Divisor)
List<int> xValues = new List<int>();
List<int> yValues = new List<int>();
foreach (Vector2D size in ItemSizes)
{
xValues.Add(size.X);
yValues.Add(size.Y);
}
xValues.Add(Parent.X);
yValues.Add(Parent.Y);
int xScale = NaturalNumbers.GCD(xValues);
int yScale = NaturalNumbers.GCD(yValues);
// rescale our parameters
Vector2D parent = new Vector2D(Parent.X / xScale, Parent.Y / yScale);
var baseShapes = new Dictionary<Vector2D, Vector2D>();
foreach (var size in ItemSizes)
{
var reducedSize = new Vector2D(size.X / xScale, size.Y / yScale);
baseShapes.Add(reducedSize, reducedSize);
}
//determine the minimum values that an allowed item shape can fit into
_xMin = int.MaxValue;
_yMin = int.MaxValue;
foreach (var size in baseShapes.Keys)
{
if (size.X < _xMin) _xMin = size.X;
if (size.Y < _yMin) _yMin = size.Y;
}
// create the memoization cache for shapes
Dictionary<Vector2D, SizeCount> shapesCache = new Dictionary<Vector2D, SizeCount>();
// find the solution pattern with the most finished items
int best = solveGMC(shapesCache, baseShapes, parent);
return best;
}
private int _xMin;
private int _yMin;
The general solution method calls a recursive worker method that does most of the actual work.
private int solveGMC(
Dictionary<Vector2D, SizeCount> shapeCache,
Dictionary<Vector2D, Vector2D> baseShapes,
Vector2D sheet )
{
// have we already solved this size?
if (shapeCache.ContainsKey(sheet)) return shapeCache[sheet].ItemCount;
SizeCount item = new SizeCount(sheet, 0);
if ((sheet.X < _xMin) || (sheet.Y < _yMin))
{
// if it's too small in either dimension then this is a scrap piece
item.ItemCount = 0;
}
else // try every way of cutting this sheet (guillotine cuts only)
{
int child0;
int child1;
// try every size of horizontal guillotine cut
for (int c = sheet.X / 2; c > 0; c--)
{
child0 = solveGMC(shapeCache, baseShapes, new Vector2D(c, sheet.Y));
child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X - c, sheet.Y));
if (child0 + child1 > item.ItemCount)
{
item.ItemCount = child0 + child1;
}
}
// try every size of vertical guillotine cut
for (int c = sheet.Y / 2; c > 0; c--)
{
child0 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, c));
child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, sheet.Y - c));
if (child0 + child1 > item.ItemCount)
{
item.ItemCount = child0 + child1;
}
}
// if no children returned finished items, then the sheet is
// either scrap or a finished item itself
if (item.ItemCount == 0)
{
if (baseShapes.ContainsKey(item.Size))
{
item.ItemCount = 1;
}
else
{
item.ItemCount = 0;
}
}
}
// add the item to the cache before we return it
shapeCache.Add(item.Size, item);
return item.ItemCount;
}
Finally, the general solution method uses a GCD function to rescale the dimensions to achieve scale-invariance. This is implemented in a static class called NaturalNumbers. I have included the relevant parts of this class below:
static class NaturalNumbers
{
/// <summary>
/// Returns the Greatest Common Divisor of two natural numbers.
/// Returns Zero if either number is Zero,
/// Returns One if either number is One and both numbers are >Zero
/// </summary>
public static int GCD(int a, int b)
{
if ((a == 0) || (b == 0)) return 0;
if (a >= b)
return gcd_(a, b);
else
return gcd_(b, a);
}
/// <summary>
/// Returns the Greatest Common Divisor of a list of natural numbers.
/// (Note: will run fastest if the list is in ascending order)
/// </summary>
public static int GCD(IEnumerable<int> numbers)
{
// parameter checks
if (numbers == null || numbers.Count() == 0) return 0;
int first = numbers.First();
if (first <= 1) return 0;
int g = (int)first;
if (g <= 1) return g;
int i = 0;
foreach (int n in numbers)
{
if (i == 0)
g = n;
else
g = GCD(n, g);
if (g <= 1) return g;
i++;
}
return g;
}
// Euclidian method with Euclidian Division,
// From: https://en.wikipedia.org/wiki/Euclidean_algorithm
private static int gcd_(int a, int b)
{
while (b != 0)
{
int t = b;
b = (a % b);
a = t;
}
return a;
}
}
Please let me know of any problems or questions you might have with this solution.
Oops, forgot that I was also using this class:
public class SizeCount
{
public Vector2D Size;
public int ItemCount;
public SizeCount(Vector2D itemSize, int itemCount)
{
Size = itemSize;
ItemCount = itemCount;
}
}
As I mentioned in the comments, it would actually be pretty easy to factor this class out of the code, but it's still in there right now.

Efficient tuple search algorithm

Given a store of 3-tuples where:
All elements are numeric, e.g. (1, 3, 4), (1300, 3, 15), (1300, 3, 15), …
Tuples are removed and added frequently
At any time the store is typically under 100,000 elements
All Tuples are available in memory
The application is interactive requiring 100s of searches per second.
What are the most efficient algorithms/data structures to perform wild card (*) searches such as:
(1, *, 6) (3601, *, *) (*, 1935, *)
The aim is to have a Linda like tuple space but on an application level
Well, there are only 8 possible arrangements of wildcards, so you can easily construct 6 multi-maps and a set to serve as indices: one for each arrangement of wildcards in the query. You don't need an 8th index because the query (*,*,*) trivially returns all tuples. The set is for tuples with no wildcards; only a membership test is needed in this case.
A multimap takes a key to a set. In your example, e.g., the query (1,*,6) would consult the multimap for queries of the form (X,*,Y), which takes key <X,Y> to the set of all tuples with X in the first position and Y in third. In this case, X=1 and Y=6.
With any reasonable hash-based multimap implementation, lookups ought to be very fast. Several hundred a second ought to be easy, and several thousand per second doable (with e.g a contemporary x86 CPU).
Insertions and deletions require updating the maps and set. Again this ought to be reasonably fast, though not as fast as lookups of course. Again several hundred per second ought to be doable.
With only ~10^5 tuples, this approach ought to be fine for memory as well. You can save a bit of space with tricks, e.g. keeping a single copy of each tuple in an array and storing indices in the map/set to represent both key and value. Manage array slots with a free list.
To make this concrete, here is pseudocode. I'm going to use angle brackets <a,b,c> for tuples to avoid too many parens:
# Definitions
For a query Q <k2,k1,k0> where each of k_i is either * or an integer,
Let I(Q) be a 3-digit binary number b2|b1|b0 where
b_i=0 if k_i is * and 1 if k_i is an integer.
Let N(i) be the number of 1's in the binary representation of i
Let M(i) be a multimap taking a tuple with N(i) elements to a set
of tuples with 3 elements.
Let t be a 3 element tuple. Then T(t,i) returns a new tuple with
only the elements of t in positions where i has a 1. For example
T(<1,2,3>,0) = <> and T(<1,2,3>,6) = <2,3>
Note that function T works fine on query tuples with wildcards.
# Algorithm to insert tuple T into the database:
fun insert(t)
for i = 0 to 7
add the entry T(t,i)->t to M(i)
# Algorithm to delete tuple T from the database:
fun delete(t)
for i = 0 to 7
delete the entry T(t,i)->t from M(i)
# Query algorithm
fun query(Q)
let i = I(Q)
return M(i).lookup(T(Q, i)) # lookup failure returns empty set
Note that for simplicity, I've not shown the "optimizations" for M(0) and M(7). For M(0), the algorithm above would create a multimap taking the empty tuple to the set of all 3-tuples in the database. You can avoid this merely by treating i=0 as a special case. Similarly M(7) would take each tuple to a set containing only itself.
An "optimized" version:
fun insert(t)
for i = 1 to 6
add the entry T(t,i)->t to M(i)
add t to set S
fun delete(t)
for i = 1 to 6
delete the entry T(t,i)->t from M(i)
remove t from set S
fun query(Q)
let i = I(Q)
if i = 0, return S
elsif i = 7 return if Q ∈ S { Q } else {}
else return M(i).lookup(T(Q, i))
Addition
For fun, a Java implementation:
package hacking;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Random;
import java.util.Scanner;
import java.util.Set;
public class Hacking {
public static void main(String [] args) {
TupleDatabase db = new TupleDatabase();
int n = 200000;
long start = System.nanoTime();
for (int i = 0; i < n; ++i) {
db.insert(db.randomTriple());
}
long stop = System.nanoTime();
double elapsedSec = (stop - start) * 1e-9;
System.out.println("Inserted " + n + " tuples in " + elapsedSec
+ " seconds (" + (elapsedSec / n * 1000.0) + "ms per insert).");
Scanner in = new Scanner(System.in);
for (;;) {
System.out.print("Query: ");
int a = in.nextInt();
int b = in.nextInt();
int c = in.nextInt();
System.out.println(db.query(new Tuple(a, b, c)));
}
}
}
class Tuple {
static final int [] N_ONES = new int[] { 0, 1, 1, 2, 1, 2, 2, 3 };
static final int STAR = -1;
final int [] vals;
Tuple(int a, int b, int c) {
vals = new int[] { a, b, c };
}
Tuple(Tuple t, int code) {
vals = new int[N_ONES[code]];
int m = 0;
for (int k = 0; k < 3; ++k) {
if (((1 << k) & code) > 0) {
vals[m++] = t.vals[k];
}
}
}
@Override
public boolean equals(Object other) {
if (other instanceof Tuple) {
Tuple triple = (Tuple) other;
return Arrays.equals(this.vals, triple.vals);
}
return false;
}
@Override
public int hashCode() {
return Arrays.hashCode(this.vals);
}
@Override
public String toString() {
return Arrays.toString(vals);
}
int code() {
int c = 0;
for (int k = 0; k < 3; k++) {
if (vals[k] != STAR) {
c |= (1 << k);
}
}
return c;
}
Set<Tuple> setOf() {
Set<Tuple> s = new HashSet<>();
s.add(this);
return s;
}
}
class Multimap extends HashMap<Tuple, Set<Tuple>> {
@Override
public Set<Tuple> get(Object key) {
Set<Tuple> r = super.get(key);
return r == null ? Collections.<Tuple>emptySet() : r;
}
void put(Tuple key, Tuple value) {
if (containsKey(key)) {
super.get(key).add(value);
} else {
super.put(key, value.setOf());
}
}
void remove(Tuple key, Tuple value) {
Set<Tuple> set = super.get(key);
set.remove(value);
if (set.isEmpty()) {
super.remove(key);
}
}
}
class TupleDatabase {
final Set<Tuple> set;
final Multimap [] maps;
TupleDatabase() {
set = new HashSet<>();
maps = new Multimap[7];
for (int i = 1; i < 7; i++) {
maps[i] = new Multimap();
}
}
void insert(Tuple t) {
set.add(t);
for (int i = 1; i < 7; i++) {
maps[i].put(new Tuple(t, i), t);
}
}
void delete(Tuple t) {
set.remove(t);
for (int i = 1; i < 7; i++) {
maps[i].remove(new Tuple(t, i), t);
}
}
Set<Tuple> query(Tuple q) {
int c = q.code();
switch (c) {
case 0: return set;
case 7: return set.contains(q) ? q.setOf() : Collections.<Tuple>emptySet();
default: return maps[c].get(new Tuple(q, c));
}
}
Random gen = new Random();
int randPositive() {
return gen.nextInt(1000);
}
Tuple randomTriple() {
return new Tuple(randPositive(), randPositive(), randPositive());
}
}
Some output:
Inserted 200000 tuples in 2.981607358 seconds (0.014908036790000002ms per insert).
Query: -1 -1 -1
[[504, 296, 987], [500, 446, 184], [499, 482, 16], [488, 823, 40], ...
Query: 500 446 -1
[[500, 446, 184], [500, 446, 762]]
Query: -1 -1 500
[[297, 56, 500], [848, 185, 500], [556, 351, 500], [779, 986, 500], [935, 279, 500], ...
If you think of the tuples like an IP address, then a radix tree (trie) type structure might work. Radix trees are used for IP discovery.
Another way might be to use bit operations to compute a bit hash for the tuple, and use bitwise OR/AND in your search for quick discovery.

How can I divide 24 music albums into 6 playlists such that the running time / length is as evenly distributed as possible?

NOTE: I plan to implement this using Java, but any plain-English explanation of the steps needed in the logic is welcome and appreciated.
I'm trying to come up with a way to divide a group of 24 music albums/records into 6 playlists such that the lengths/running times of all 6 playlists are as close to each other as possible.
I initially thought that maybe I could find all possible permutations of the problem and then work out logic that would analyze which is the best division. I even created a thread to ask for help with it yesterday (I have 24 items that I need to separate into 6 sets of 4. What algorithm can I use to find all possible combinations?). However, when I got close to a solution, I realized that just enumerating all permutations of the problem would take incredibly long to run, so that approach seems impractical.
So I was wondering, is there a faster way to approach such a problem?
Given that these are the running times of the albums in question (in MM:SS format), what is the fastest way for me to find the division of albums into 6 playlists of 4 such that the lengths of the playlists are as close to each other as possible?
39:03
41:08
41:39
42:54
44:31
44:34
44:40
45:55
45:59
47:06
47:20
47:53
49:35
49:57
50:15
51:35
51:50
55:45
58:10
58:11
59:48
59:58
60:00
61:08
I did the math and, considering the total time of all the albums, having 6 playlists that each run 199 minutes and 49 seconds would be perfect... but since the individual album lengths probably don't allow for that exact a division, my question is what the most exact possible division would be.
NOTE: I could actually do this manually and get a close enough approximation that would suffice, but I am still really interested on how it could be done through a program.
Thanks!
I'd suggest you use a simulated annealing algorithm.
And here is one nice solution derived by this algorithm:
[17, 16, 15, 9] 199:39
[3, 14, 10, 24] 199:50
[6, 8, 13, 21] 199:52
[1, 5, 20, 19] 199:55
[4, 23, 2, 18] 199:47
[11, 7, 22, 12] 199:51
As Steven Skiena points out in his book ("The Algorithm Design Manual"), it is very helpful to use the simulated annealing metaheuristic for finding acceptable solutions to real-life combinatorial problems.
So, as you mentioned, you need to put 4 tracks into each of 6 albums such that all albums have approximately the same duration (in the code below, a "track" stands for one of your 24 albums and an "album" for a playlist).
Firstly, let's consider: which property do we need to optimize?
From my point of view, the most suitable formulation would be: minimize the standard deviation of the durations of all albums. (But, if needed, you are free to include other, more complex properties.)
Let's name the value of the optimized property the energy.
The main idea of the algorithm
Each state of our system is characterized by some value of energy. By performing some actions on the system we can change its state (e.g. swapping tracks between different albums).
Also, we have an abstract property called temperature.
When the system is at a high temperature, it is free to change its state to another state, even if the new state has a higher value of energy.
But when the temperature is low, the system tends to change its state mostly to new states with lower values of energy.
By physical analogy, the probability of accepting a state with higher energy can be limited the same way the Boltzmann distribution defines it: P(accept) = exp(-(E_new - E_current) / T).
Here is an illustration of how the standard deviation of durations decreased while deriving the solution above (chart omitted).
Here is a full Java implementation of the algorithm, which gives the solution above:
import java.util.Arrays;
import java.util.Random;
public class SimulatedAnnealingTracksOrdering {
public static void main(String[] args) {
int albumsCount = 6;
int tracksInAlbum = 4;
Album[] result = generateOptimalTracksOrdering(
tracksInAlbum,
albumsCount,
new Track[] {
new Track(1, "39:03"), new Track(2, "41:08"),
new Track(3, "41:39"), new Track(4, "42:54"),
new Track(5, "44:31"), new Track(6, "44:34"),
new Track(7, "44:40"), new Track(8, "45:55"),
new Track(9, "45:59"), new Track(10, "47:06"),
new Track(11, "47:20"), new Track(12, "47:53"),
new Track(13, "49:35"), new Track(14, "49:57"),
new Track(15, "50:15"), new Track(16, "51:35"),
new Track(17, "51:50"), new Track(18, "55:45"),
new Track(19, "58:10"), new Track(20, "58:11"),
new Track(21, "59:48"), new Track(22, "59:58"),
new Track(23, "60:00"), new Track(24, "61:08"),
});
for (Album album : result) {
System.out.println(album);
}
}
private static Album[] generateOptimalTracksOrdering(
int tracksInAlbum, int albumsCount, Track[] tracks) {
// Initialize current solution
Albums currentSolution =
new Albums(tracksInAlbum, albumsCount, tracks);
// Initialize energy of a current solution
double currentEnergy =
currentSolution.albumsDurationStandartDeviation();
System.out.println("Current energy is: " + currentEnergy);
// Also, we will memorize the solution with smallest value of energy
Albums bestSolution = currentSolution.clone();
double bestEnergy = currentEnergy;
// Constant, which defines the minimal value of energy
double minEnergy = 0.1;
// Initial temperature
double temperature = 150;
// We will decrease value of temperature, by multiplying on this
// coefficient
double alpha = 0.999;
// Constant, which defines minimal value of temperature
double minTemperature = 0.1;
// For each value of temperature - we will perform few probes, before
// decreasing temperature
int numberOfProbes = 100;
Random random = new Random(1);
while ((temperature > minTemperature)
&& (currentEnergy > minEnergy)) {
for (int i = 0; i < numberOfProbes; i++) {
// Generating new state
currentSolution.randomTracksPermutation();
double newEnergy =
currentSolution.albumsDurationStandartDeviation();
// As defined by Boltzmann distribution
double acceptanceProbability =
Math.exp(-(newEnergy - currentEnergy) / temperature);
// States with smaller energy - will be accepted always
if ((newEnergy < currentEnergy)
|| (random.nextDouble() < acceptanceProbability)) {
currentEnergy = newEnergy;
System.out.println("Current energy is: " + currentEnergy);
if (newEnergy < bestEnergy) {
bestSolution = currentSolution.clone();
bestEnergy = newEnergy;
}
} else {
// If state can't be accepted - rollback to previous state
currentSolution.undoLastPermutation();
}
}
// Decreasing temperature
temperature *= alpha;
}
// Return best solution
return bestSolution.getAlbums();
}
}
/**
* Container for bunch of albums
*/
class Albums {
private Random random = new Random(1);
private Album[] albums;
// These fields, are used for memorizing last permutation
// (needed for rollbacking)
private Album sourceAlbum;
private int sourceIndex;
private Album targetAlbum;
private int targetIndex;
public Albums(int tracksInAlbum, int albumsCount, Track[] tracks) {
// Put all tracks to albums
this.albums = new Album[albumsCount];
int t = 0;
for (int i = 0; i < albumsCount; i++) {
this.albums[i] = new Album(tracksInAlbum);
for (int j = 0; j < tracksInAlbum; j++) {
this.albums[i].set(j, tracks[t]);
t++;
}
}
}
/**
* Calculating standard deviations of albums durations
*/
public double albumsDurationStandartDeviation() {
double sumDuration = 0;
for (Album album : this.albums) {
sumDuration += album.getDuraion();
}
double meanDuration =
sumDuration / this.albums.length;
double sumSquareDeviation = 0;
for (Album album : this.albums) {
sumSquareDeviation +=
Math.pow(album.getDuraion() - meanDuration, 2);
}
return Math.sqrt(sumSquareDeviation / this.albums.length);
}
/**
* Performing swapping of random tracks between random albums
*/
public void randomTracksPermutation() {
this.sourceAlbum = this.pickRandomAlbum();
this.sourceIndex =
this.random.nextInt(this.sourceAlbum.getTracksCount());
this.targetAlbum = this.pickRandomAlbum();
this.targetIndex =
this.random.nextInt(this.targetAlbum.getTracksCount());
this.swapTracks();
}
public void undoLastPermutation() {
this.swapTracks();
}
private void swapTracks() {
Track sourceTrack = this.sourceAlbum.get(this.sourceIndex);
Track targetTrack = this.targetAlbum.get(this.targetIndex);
this.sourceAlbum.set(this.sourceIndex, targetTrack);
this.targetAlbum.set(this.targetIndex, sourceTrack);
}
private Album pickRandomAlbum() {
int index = this.random.nextInt(this.albums.length);
return this.albums[index];
}
public Album[] getAlbums() {
return this.albums;
}
private Albums() {
// Needed for cloning
}
@Override
protected Albums clone() {
Albums cloned = new Albums();
cloned.albums = new Album[this.albums.length];
for (int i = 0; i < this.albums.length; i++) {
cloned.albums[i] = this.albums[i].clone();
}
return cloned;
}
}
/**
* Container for tracks
*/
class Album {
private Track[] tracks;
public Album(int size) {
this.tracks = new Track[size];
}
/**
* Duration of album == sum of durations of tracks
*/
public int getDuraion() {
int acc = 0;
for (Track track : this.tracks) {
acc += track.getDuration();
}
return acc;
}
public Track get(int trackNum) {
return this.tracks[trackNum];
}
public void set(int trackNum, Track track) {
this.tracks[trackNum] = track;
}
public int getTracksCount() {
return this.tracks.length;
}
public Track[] getTracks() {
return this.tracks;
}
@Override
protected Album clone() {
Album cloned = new Album(this.tracks.length);
for (int i = 0; i < this.tracks.length; i++) {
cloned.tracks[i] = this.tracks[i];
}
return cloned;
}
/**
* Displaying duration in MM:SS format
*/
@Override
public String toString() {
int duraion = this.getDuraion();
String duration_MM_SS = String.format("%d:%02d", duraion / 60, duraion % 60); // zero-pad the seconds
return Arrays.toString(this.tracks) + "\t" + duration_MM_SS;
}
}
class Track {
private final int id;
private final int durationInSeconds;
public Track(int id, String duration_MM_SS) {
this.id = id;
this.durationInSeconds =
this.parseDuration(duration_MM_SS);
}
/**
* Converting MM:SS duration to seconds
*/
private int parseDuration(String duration_MM_SS) {
String[] parts = duration_MM_SS.split(":");
return (Integer.parseInt(parts[0]) * 60)
+ Integer.parseInt(parts[1]);
}
public int getDuration() {
return this.durationInSeconds;
}
public int getId() {
return this.id;
}
@Override
public String toString() {
return Integer.toString(this.id);
}
}
This is equivalent* to multiprocessor scheduling if you consider each album as a job and each playlist as a processor; finding an optimal solution is NP-hard.
However, there are efficient algorithms which give decent, though not necessarily optimal, results. For example, sort the albums by length and repeatedly add the longest remaining album to the currently shortest playlist.
If we number the albums from 1 to 24 from shortest to longest, this algorithm gives the following division.
{24, 13, 9, 6} (201:16)
{23, 14, 12, 2} (198:58)
{22, 15, 10, 4} (200:13)
{21, 16, 8, 5} (201:49)
{20, 17, 11, 3} (199:00)
{19, 18, 7, 1} (197:38)
* If we consider "evenly distributed" to mean that the length of the longest playlist is minimized.
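To make the greedy heuristic concrete, here is a minimal Java sketch (class and variable names are mine, and the four-albums-per-playlist cap is enforced explicitly, since plain longest-first doesn't guarantee it):
import java.util.*;
public class GreedyPartition {
    // The 24 album lengths from the question, in MM:SS.
    static final String TIMES =
        "39:03 41:08 41:39 42:54 44:31 44:34 44:40 45:55 45:59 47:06 47:20 47:53 "
      + "49:35 49:57 50:15 51:35 51:50 55:45 58:10 58:11 59:48 59:58 60:00 61:08";
    static int total(List<Integer> playlist) {
        int s = 0;
        for (int x : playlist) s += x;
        return s;
    }
    public static void main(String[] args) {
        List<Integer> albums = new ArrayList<>();
        for (String s : TIMES.split("\\s+")) {
            String[] p = s.split(":");
            albums.add(Integer.parseInt(p[0]) * 60 + Integer.parseInt(p[1]));
        }
        albums.sort(Collections.reverseOrder());          // longest first
        PriorityQueue<List<Integer>> playlists =          // shortest playlist first
            new PriorityQueue<>(Comparator.comparingInt(GreedyPartition::total));
        for (int i = 0; i < 6; i++) playlists.add(new ArrayList<>());
        for (int album : albums) {
            Deque<List<Integer>> full = new ArrayDeque<>();
            List<Integer> target = playlists.poll();
            while (target.size() >= 4) {                  // skip playlists that are full
                full.push(target);
                target = playlists.poll();
            }
            target.add(album);
            playlists.add(target);                        // re-insert with its new total
            while (!full.isEmpty()) playlists.add(full.pop());
        }
        for (List<Integer> pl : playlists) {
            int t = total(pl);
            System.out.printf("%s (%d:%02d)%n", pl, t / 60, t % 60);
        }
    }
}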
With a more intelligent search algorithm than brute force, we don't have to go through all 1e12 possibilities. First we convert the input, enumerate all sets of four, and sort them by their proximity to the target time.
import heapq
import itertools
def secondsfromstring(s):
minutes, seconds = s.split(':')
return int(minutes) * 60 + int(seconds)
def stringfromseconds(seconds):
minutes, seconds = divmod(seconds, 60)
return '{}:{:02}'.format(minutes, seconds)
# for simplicity, these have to be pairwise distinct
stringtimes = '''39:03 41:08 41:39 42:54 44:31 44:34
44:40 45:55 45:59 47:06 47:20 47:53
49:35 49:57 50:15 51:35 51:50 55:45
58:10 58:11 59:48 59:58 60:00 61:08'''
times = [secondsfromstring(s) for s in stringtimes.split()]
quads = [frozenset(quad) for quad in itertools.combinations(times, 4)]
targettime = sum(times) / 6
quads.sort(key=lambda quad: abs(sum(quad) - targettime))
Now comes a search. We keep a priority queue with partial solutions, ordered by the minimum possible maximum deviation from the target time. The priority queue lets us explore the most promising partial solutions first.
queue = [(0, frozenset(times), [])]
while True:
i, remaining, sofar = heapq.heappop(queue)
if not remaining:
for quad in sofar:
print(stringfromseconds(sum(quad)), ':',
*(stringfromseconds(time) for time in quad))
break
while i < len(quads):
quad = quads[i]
if quad.issubset(remaining):
heapq.heappush(queue, (i + 1, remaining, sofar))
heapq.heappush(queue, (i + 1, remaining - quad, sofar + [quad]))
break
i += 1
In a couple of seconds, this code spits out the following optimal answer. (We got lucky: this code optimizes the slightly different objective of minimizing the maximum deviation from the target time, but the more complicated program below, which minimizes the difference between the minimum and maximum playlist lengths, finds the same grouping.)
199:50 : 47:06 41:39 61:08 49:57
199:52 : 44:34 45:55 59:48 49:35
199:45 : 55:45 41:08 59:58 42:54
199:53 : 44:40 47:20 60:00 47:53
199:55 : 58:10 44:31 58:11 39:03
199:39 : 51:35 50:15 51:50 45:59
The program that optimizes the max minus min objective is below. Compared to the above program, it doesn't stop after the first solution but instead waits until we start considering sets whose deviation from the target is greater than the smallest max minus min of solutions that we have found so far. Then it outputs the best.
import heapq
import itertools
def secondsfromstring(s):
minutes, seconds = s.split(':')
return int(minutes) * 60 + int(seconds)
def stringfromseconds(seconds):
minutes, seconds = divmod(seconds, 60)
return '{}:{:02}'.format(minutes, seconds)
# for simplicity, these have to be pairwise distinct
stringtimes = '''39:03 41:08 41:39 42:54 44:31 44:34
44:40 45:55 45:59 47:06 47:20 47:53
49:35 49:57 50:15 51:35 51:50 55:45
58:10 58:11 59:48 59:58 60:00 61:08'''
times = [secondsfromstring(s) for s in stringtimes.split()]
quads = [frozenset(quad) for quad in itertools.combinations(times, 4)]
targettime = sum(times) / 6
quads.sort(key=lambda quad: abs(sum(quad) - targettime))
def span(solution):
quadtimes = [sum(quad) for quad in solution]
return max(quadtimes) - min(quadtimes)
candidates = []
bound = None
queue = [(0, frozenset(times), [])]
while True:
i, remaining, sofar = heapq.heappop(queue)
if not remaining:
candidates.append(sofar)
newbound = span(sofar)
if bound is None or newbound < bound: bound = newbound
if bound is not None and abs(sum(quads[i]) - targettime) >= bound: break
while i < len(quads):
quad = quads[i]
i += 1
if quad.issubset(remaining):
heapq.heappush(queue, (i, remaining, sofar))
heapq.heappush(queue, (i, remaining - quad, sofar + [quad]))
break
best = min(candidates, key=span)
for quad in best:
print(stringfromseconds(sum(quad)), ':',
*(stringfromseconds(time) for time in quad))
This could be a comment, but it is too long, so I am posting it as an answer. It is a small, easy-to-code improvement to hammar's solution. The algorithm doesn't guarantee an optimal solution, but it finds a better one.
Start with hammar's greedy algorithm to fill the playlists with albums. Then all you have to do is loop over the playlists a few times:
Let D be the difference between the total lengths of playlists A and B, where length(A) > length(B). Loop through the albums of A and B, and whenever you find albums x ∈ A and y ∈ B that satisfy x > y and x − y < D, swap x and y.
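A minimal Java sketch of one such improvement pass (method and variable names are mine; the caller keeps calling it until it returns false):
// One pass over two playlists, given as arrays of album lengths in seconds.
// Swaps x from the longer list with y from the shorter list whenever
// x > y and x - y < D, which strictly shrinks the gap between the totals.
static boolean improvementPass(int[] longer, int[] shorter) {
    int d = sum(longer) - sum(shorter);       // D, assumed positive
    for (int i = 0; i < longer.length; i++) {
        for (int j = 0; j < shorter.length; j++) {
            int x = longer[i], y = shorter[j];
            if (x > y && x - y < d) {
                longer[i] = y;                // commit the swap
                shorter[j] = x;
                return true;                  // totals changed; caller re-loops
            }
        }
    }
    return false;                             // no improving swap left
}
static int sum(int[] a) {
    int s = 0;
    for (int v : a) s += v;
    return s;
}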
This gave me the following results:
{7,8,22,13} 200:8
{2,11,24,15} 199:51
{4,10,23,14} 199:57
{3,17,12,19} 199:32
{5,16,21,6} 200:28
{1,18,9,20} 198:58
For what it's worth, before stemm posted his simulated annealing algorithm, I already had a solution that came close enough to what I was looking for. So, just to share another (albeit relatively crude) approach, here it is:
(1) I sorted all of the individual album times from shortest to longest.
#01 - 39:03
#02 - 41:08
#03 - 41:39
#04 - 42:54
#05 - 44:31
#06 - 44:34
#07 - 44:40
#08 - 45:55
#09 - 45:59
#10 - 47:06
#11 - 47:20
#12 - 47:53
#13 - 49:35
#14 - 49:57
#15 - 50:15
#16 - 51:35
#17 - 51:50
#18 - 55:45
#19 - 58:10
#20 - 58:11
#21 - 59:48
#22 - 59:58
#23 - 60:00
#24 - 61:08
(2) I grouped them into groups of 4 such that the resulting playlists would, in theory, be as close to each other as possible based purely on their positions in the sorted list. Each group is the shortest item + the longest item + the middle two items; then, from the remaining (still sorted) list, again the shortest + the longest + the middle two, and so on until all items are grouped (a sketch of this grouping follows the list).
(3) Using the resulting sets of 4, I iterate through all the individual albums in each set, tentatively swapping each one with every album from the other sets. If a swap makes the distance between the two source sets smaller, I commit it; otherwise I skip it. I repeat this until no swap between any two albums can further reduce the distance between their parent sets.
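A small sketch of the step-(2) grouping (my code, not part of the original; it assumes the times are already sorted ascending):
import java.util.*;
public class InitialGrouping {
    // Repeatedly group: shortest + the middle two + longest of what remains.
    static List<int[]> group(int[] sortedTimes) {
        LinkedList<Integer> rest = new LinkedList<>();
        for (int t : sortedTimes) rest.add(t);
        List<int[]> sets = new ArrayList<>();
        while (rest.size() >= 4) {
            int shortest = rest.removeFirst();
            int longest = rest.removeLast();
            int midLow = rest.remove(rest.size() / 2 - 1);  // lower middle
            int midHigh = rest.remove(rest.size() / 2);     // upper middle
            sets.add(new int[]{shortest, midLow, midHigh, longest});
        }
        return sets;
    }
}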
To illustrate step #3 in code:
Partitioner(){
for(int i=0;i<total.length;i++)
total[i] = 0;
// get the total time for all sets of four albums
for(int i=0;i<sets.length;i++){
for(int j=0;j<sets[i].length;j++){
total[i] = total[i] + sets[i][j];
}
}
// this method will be called recursively until no more swaps can be done
swapper();
for(int i=0;i<sets.length;i++){
for(int j=0;j<sets[i].length;j++){
System.out.println(sets[i][j]);
}
}
}
void swapper(){
int count = 0;
for(int i=0;i<sets.length;i++){ // loop through all the sets
for(int j=0;j<sets[i].length;j++){ // loop through all the album in a set
for(int k=0;k<sets.length;k++){ // for every album, loop through all the other sets for comparison
if(k!=i){ // but of course, don't compare it to its own set
for(int l=0;l<sets[k].length;l++){ // pair the album (sets[i][j]) with every album from the other sets
int parentA = total[i]; // store the total length from the parent set of album sets[i][j]
int parentB = total[k]; // store the total length from the parent set of the album being compared
int origDist = Math.abs(parentA - parentB); // measure the original distance between those two sets
int newA = total[i] - sets[i][j] + sets[k][l]; //compute new length of "parentA" if the two albums were swapped
int newB = total[k] - sets[k][l] + sets[i][j]; //compute new length of "parentB" if the two albums were swapped
int newdist = Math.abs(newA - newB); //compute new distance if the two albums were swapped
if(newdist<origDist){ // if the new resulting distance is less than the original distance, commit the swap
int temp = sets[i][j];
sets[i][j] = sets[k][l];
sets[k][l] = temp;
total[i] = newA;
total[k] = newB;
count++;
}
}
}
}
}
}
System.out.println(count);
if (count > 0 ){
swapper();
}
}

Dynamic programming: Algorithm to solve the following?

I have recently completed the following interview exercise:
'A robot can be programmed to run "a", "b", "c"... "n" kilometers and it takes ta, tb, tc... tn minutes, respectively. Once it runs to programmed kilometers, it must be turned off for "m" minutes.
After "m" minutes it can again be programmed to run for a further "a", "b", "c"... "n" kilometers.
How would you program this robot to go an exact number of kilometers in the minimum amount of time?'
I thought it was a variation of the unbounded knapsack problem, in which the size would be the number of kilometers and the value the time needed to complete each stretch. The main difference is that we need to minimise, rather than maximise, the value. So I used the equivalent of the following solution: http://en.wikipedia.org/wiki/Knapsack_problem#Unbounded_knapsack_problem
in which I select the minimum.
Finally, because we need an exact solution (if one exists), I iterated over the map built by the algorithm for all the different distances, and through each of the robot's programmed distances, to find the exact distance and the minimum time among those.
I think the pause the robot takes between runs is a bit of a red herring: you just need to include it in your calculations, and it does not affect the approach taken.
I am probably wrong, because I failed the test. I don't have any other feedback as to the expected solution.
Edit: maybe I wasn't wrong after all and I failed for different reasons. I just wanted to validate my approach to this problem.
import static com.google.common.collect.Sets.*;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.apache.log4j.Logger;
import com.google.common.base.Objects;
import com.google.common.base.Preconditions;
import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
public final class Robot {
static final Logger logger = Logger.getLogger (Robot.class);
private Set<ProgrammedRun> programmedRuns;
private int pause;
private int totalDistance;
private Robot () {
//don't expose default constructor & prevent subclassing
}
private Robot (int[] programmedDistances, int[] timesPerDistance, int pause, int totalDistance) {
this.programmedRuns = newHashSet ();
for (int i = 0; i < programmedDistances.length; i++) {
this.programmedRuns.add (new ProgrammedRun (programmedDistances [i], timesPerDistance [i] ) );
}
this.pause = pause;
this.totalDistance = totalDistance;
}
public static Robot create (int[] programmedDistances, int[] timesPerDistance, int pause, int totalDistance) {
Preconditions.checkArgument (programmedDistances.length == timesPerDistance.length);
Preconditions.checkArgument (pause >= 0);
Preconditions.checkArgument (totalDistance >= 0);
return new Robot (programmedDistances, timesPerDistance, pause, totalDistance);
}
/**
* @return null if no strategy was found; an empty map if the distance is zero;
* otherwise a map with the programmed runs as keys and the number of times
* each needs to be run as values.
*
*/
Map<ProgrammedRun, Integer> calculateOptimalStrategy () {
//for efficiency, consider this case first
if (this.totalDistance == 0) {
return Maps.newHashMap ();
}
//list of solutions for different distances. Element "i" of the list is the best set of runs that cover at least "i" kilometers
List <Map<ProgrammedRun, Integer>> runsForDistances = Lists.newArrayList();
//special case i = 0 -> empty map (no runs needed)
runsForDistances.add (new HashMap<ProgrammedRun, Integer> () );
for (int i = 1; i <= totalDistance; i++) {
Map<ProgrammedRun, Integer> map = new HashMap<ProgrammedRun, Integer> ();
int minimumTime = -1;
for (ProgrammedRun pr : programmedRuns) {
int distance = Math.max (0, i - pr.getDistance ());
int time = getTotalTime (runsForDistances.get (distance) ) + pause + pr.getTime();
if (minimumTime < 0 || time < minimumTime) {
minimumTime = time;
//new minimum found
map = new HashMap<ProgrammedRun, Integer> ();
map.putAll(runsForDistances.get (distance) );
//increase count
Integer num = map.get (pr);
if (num == null) num = Integer.valueOf (1);
else num++;
//update map
map.put (pr, num);
}
}
runsForDistances.add (map );
}
//last step: calculate the combination with exact distance
int minimumTime2 = -1;
int bestIndex = -1;
for (int i = 0; i <= totalDistance; i++) {
if (getTotalDistance (runsForDistances.get (i) ) == this.totalDistance ) {
int time = getTotalTime (runsForDistances.get (i) );
if (time > 0) time -= pause;
if (minimumTime2 < 0 || time < minimumTime2 ) {
minimumTime2 = time;
bestIndex = i;
}
}
}
//if solution found
if (bestIndex != -1) {
return runsForDistances.get (bestIndex);
}
//try all combinations, since none of the existing maps run for the exact distance
List <Map<ProgrammedRun, Integer>> exactRuns = Lists.newArrayList();
for (int i = 0; i <= totalDistance; i++) {
int distance = getTotalDistance (runsForDistances.get (i) );
for (ProgrammedRun pr : programmedRuns) {
//solution found
if (distance + pr.getDistance() == this.totalDistance ) {
Map<ProgrammedRun, Integer> map = new HashMap<ProgrammedRun, Integer> ();
map.putAll (runsForDistances.get (i));
//increase count
Integer num = map.get (pr);
if (num == null) num = Integer.valueOf (1);
else num++;
//update map
map.put (pr, num);
exactRuns.add (map);
}
}
}
if (exactRuns.isEmpty()) return null;
//finally return the map with the best time
minimumTime2 = -1;
Map<ProgrammedRun, Integer> bestMap = null;
for (Map<ProgrammedRun, Integer> m : exactRuns) {
int time = getTotalTime (m);
if (time > 0) time -= pause; //remove last pause
if (minimumTime2 < 0 || time < minimumTime2 ) {
minimumTime2 = time;
bestMap = m;
}
}
return bestMap;
}
private int getTotalTime (Map<ProgrammedRun, Integer> runs) {
int time = 0;
for (Map.Entry<ProgrammedRun, Integer> runEntry : runs.entrySet()) {
time += runEntry.getValue () * runEntry.getKey().getTime ();
//add pauses
time += this.pause * runEntry.getValue ();
}
return time;
}
private int getTotalDistance (Map<ProgrammedRun, Integer> runs) {
int distance = 0;
for (Map.Entry<ProgrammedRun, Integer> runEntry : runs.entrySet()) {
distance += runEntry.getValue() * runEntry.getKey().getDistance ();
}
return distance;
}
class ProgrammedRun {
private int distance;
private int time;
private transient float speed;
ProgrammedRun (int distance, int time) {
this.distance = distance;
this.time = time;
this.speed = (float) distance / time;
}
@Override public String toString () {
return "(distance =" + distance + "; time=" + time + ")";
}
@Override public boolean equals (Object other) {
return other instanceof ProgrammedRun
&& this.distance == ((ProgrammedRun)other).distance
&& this.time == ((ProgrammedRun)other).time;
}
@Override public int hashCode () {
return Objects.hashCode (Integer.valueOf (this.distance), Integer.valueOf (this.time));
}
int getDistance() {
return distance;
}
int getTime() {
return time;
}
float getSpeed() {
return speed;
}
}
}
public class Main {
/* Input variables for the robot */
private static int [] programmedDistances = {1, 2, 3, 5, 10}; //in kilometers
private static int [] timesPerDistance = {10, 5, 3, 2, 1}; //in minutes
private static int pause = 2; //in minutes
private static int totalDistance = 41; //in kilometers
/**
* @param args
*/
public static void main(String[] args) {
Robot r = Robot.create (programmedDistances, timesPerDistance, pause, totalDistance);
Map<ProgrammedRun, Integer> strategy = r.calculateOptimalStrategy ();
if (strategy == null) {
System.out.println ("No strategy that matches the conditions was found");
} else if (strategy.isEmpty ()) {
System.out.println ("No need to run; distance is zero");
} else {
System.out.println ("Strategy found:");
System.out.println (strategy);
}
}
}
Simplifying slightly, let t_i be the time (including downtime) that it takes the robot to run distance d_i. Assume that t_1/d_1 ≤ … ≤ t_n/d_n. If t_1/d_1 is significantly smaller than t_2/d_2, and d_1 and the total distance D to be run are large, then branch and bound likely outperforms dynamic programming. Branch and bound solves the integer programming formulation
minimize ∑_i t_i·x_i
subject to
∑_i d_i·x_i = D
x_i ∈ ℕ for all i
by using the value of the relaxation, where each x_i can be any nonnegative real, as a guide. The relaxed optimum is easily verified to be at most (t_1/d_1)·D, by setting x_1 = D/d_1 and x_i = 0 for all i ≠ 1, and at least (t_1/d_1)·D, by setting the sole variable of the dual program to t_1/d_1. Solving the relaxation is the bound step; every integer solution is a fractional solution, so the best integer solution requires time at least (t_1/d_1)·D.
The branch step takes one integer program and splits it into two whose solutions, taken together, cover the entire solution space of the original. In this case, one piece could have the extra constraint x_1 = 0 and the other the extra constraint x_1 ≥ 1. It might look as though this creates subproblems with side constraints, but in fact we can just delete the first move, or decrease D by d_1 and add the constant t_1 to the objective. Another option for branching is to add either the constraint x_i = ⌊D/d_i⌋ or x_i ≤ ⌊D/d_i⌋ − 1, which requires generalizing to upper bounds on the number of repetitions of each move.
The main loop of branch and bound selects one of a collection of subproblems, branches, computes bounds for the two subproblems, and puts them back into the collection. The efficiency over brute force comes from the fact that, when we have a solution with a particular value, every subproblem whose relaxed value is at least that much can be thrown away. Once the collection is emptied this way, we have the optimal solution.
Hybrids of branch and bound and dynamic programming are possible, for example, computing optimal solutions for small D via DP and using those values instead of branching on subproblems that have been solved.
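Not from the original answer, but a minimal Java sketch (Java 16+; all names are mine) of the loop just described. Moves are pre-sorted by time-per-km ratio; each node branches on "use the cheapest still-allowed move once" versus "forbid it", and the relaxation bound prunes subproblems that cannot beat the incumbent:
import java.util.*;
public class RobotBB {
    // Moves sorted by time-per-km ratio, cheapest first; times already include
    // the pause m (here m = 2), per the answer's simplification.
    static int[] dist = {10, 5, 3, 2, 1};
    static int[] time = {3, 4, 5, 7, 12};
    // State: index of the cheapest still-allowed move, remaining km, time spent.
    record Node(int move, int remaining, int spent) {}
    static int solve(int D) {
        double[] ratio = new double[dist.length];
        for (int i = 0; i < dist.length; i++) ratio[i] = (double) time[i] / dist[i];
        int best = Integer.MAX_VALUE;
        PriorityQueue<Node> open = new PriorityQueue<>(Comparator.comparingDouble(
                (Node n) -> n.spent() + n.remaining() * ratio[n.move()]));
        open.add(new Node(0, D, 0));
        while (!open.isEmpty()) {
            Node n = open.poll();
            double bound = n.spent() + n.remaining() * ratio[n.move()];
            if (bound >= best) break;               // nothing left can beat the incumbent
            if (n.remaining() == 0) { best = n.spent(); continue; }
            if (n.remaining() >= dist[n.move()])    // branch x_move >= 1: take the move once
                open.add(new Node(n.move(), n.remaining() - dist[n.move()],
                        n.spent() + time[n.move()]));
            if (n.move() + 1 < dist.length)         // branch x_move = 0: forbid the move
                open.add(new Node(n.move() + 1, n.remaining(), n.spent()));
        }
        return best;                                // Integer.MAX_VALUE if infeasible
    }
    public static void main(String[] args) {
        System.out.println(solve(41));
    }
}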
Create an array a of size D + 1 (D is your total distance) and fill it as follows:
a[i] = infinity for all i
a[0] = 0
a[i] = min{ min{ a[i−j] + t_j + m, for every programmed distance j with j ≠ i }, t_i if i itself is a programmed distance }
a[D] is the lowest possible value. You can also keep a second array b to record which choice produced each a[i]; if a[D] == infinity, no exact solution is possible.
Edit: we can solve it in another way by building a digraph. Again the graph depends on the path length D: it has nodes labeled {0..D}. Start from node 0 and connect it to every node it can reach; that is, if the robot has a programmed distance i, connect node 0 to node i with weight t_i. For all other pairs, connect node i to node j (j > i) with weight t_{j−i} + m whenever j − i is one of the programmed distances. Now find the shortest path from node 0 to node D. This algorithm is still O(nD).
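A minimal sketch of this recurrence (my names; the pause m is charged before every run except the first, matching the edge weights above, and the example inputs are the ones used further down in this thread):
import java.util.Arrays;
public class RobotDP {
    // Minimum time to cover exactly D km, or -1 if impossible.
    static int minTime(int D, int[] dist, int[] t, int m) {
        final long INF = Long.MAX_VALUE / 2;
        long[] a = new long[D + 1];
        Arrays.fill(a, INF);
        a[0] = 0;
        for (int i = 1; i <= D; i++)
            for (int j = 0; j < dist.length; j++)
                if (dist[j] <= i && a[i - dist[j]] < INF) {
                    long pause = (i == dist[j]) ? 0 : m;  // no pause before the first run
                    a[i] = Math.min(a[i], a[i - dist[j]] + pause + t[j]);
                }
        return a[D] >= INF ? -1 : (int) a[D];
    }
    public static void main(String[] args) {
        System.out.println(minTime(41, new int[]{1, 2, 3, 5, 10},
                new int[]{10, 5, 3, 2, 1}, 2));
    }
}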
Let G be the desired distance to run.
Let n be the longest distance the robot can run without a pause.
Let L = G / n (integer division, discarding the fractional part).
Let R = G mod n (i.e. the remainder from the above division).
Make the robot run its longest distance (i.e. n) L times, then run whichever programmed distance (a, b, c, etc.) exceeds R by the least amount (i.e. the smallest available distance that is equal to or greater than R).
Either I understood the problem wrong, or you're all overthinking it.
I am a big believer in showing instead of telling. Here is a program that may be doing what you are looking for. Let me know if it satisfies your question. Simply copy, paste, and run the program. You should of course test with your own data set.
import java.util.Arrays;
public class Speed {
/***
*
* @param distance
* @param sprints = {{A,Ta},{B,Tb},{C,Tc}, ..., {N,Tn}}
*/
public static int getFastestTime(int distance, int[][] sprints){
    long[] minTime = new long[distance+1]; // index = distance covered, value = best time
    Arrays.fill(minTime, Integer.MAX_VALUE);
    minTime[0] = 0;
    // Unbounded-knapsack style: each sprint may be reused any number of times.
    for(int[] speed: sprints)
        for(int d=1; d<minTime.length; d++)
            if(d>=speed[0] && minTime[d] > minTime[d-speed[0]]+speed[1])
                minTime[d] = minTime[d-speed[0]]+speed[1];
    // Note: the mandatory pause is not modeled here; to include a pause m,
    // add m to every sprint time and subtract a single m from the result.
    return (int)minTime[distance];
}//
public static void main(String... args){
//sprints ={{A,Ta},{B,Tb},{C,Tc}, ..., {N,Tn}}
int[][] sprints={{3,2},{5,3},{7,5}};
int distance = 21;
System.out.println(getFastestTime(distance,sprints));
}
}

Roulette wheel selection algorithm [duplicate]

This question already has answers here:
Roulette Selection in Genetic Algorithms
(14 answers)
Closed 7 years ago.
Can anyone provide some pseudocode for a roulette selection function? How would I implement this? I don't really understand how to read the math notation; I'm looking for a general algorithm.
The other answers seem to be assuming that you are trying to implement a roulette game. I think that you are asking about roulette wheel selection in evolutionary algorithms.
Here is some Java code that implements roulette wheel selection.
Assume you have 10 items to choose from and you choose by generating a random number between 0 and 1. You divide the range 0 to 1 up into ten non-overlapping segments, each proportional to the fitness of one of the ten items. For example, this might look like this:
0 - 0.3 is item 1
0.3 - 0.4 is item 2
0.4 - 0.5 is item 3
0.5 - 0.57 is item 4
0.57 - 0.63 is item 5
0.63 - 0.68 is item 6
0.68 - 0.8 is item 7
0.8 - 0.85 is item 8
0.85 - 0.98 is item 9
0.98 - 1 is item 10
This is your roulette wheel. Your random number between 0 and 1 is your spin. If the random number is 0.46, then the chosen item is item 3. If it's 0.92, then it's item 9.
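A minimal Java sketch of the idea just described (names are mine; it builds the cumulative segment boundaries on the fly and does one "spin"):
import java.util.Random;
public class WheelSketch {
    // Returns the index of the item selected by one spin.
    static int spin(double[] fitnesses, Random rng) {
        double total = 0;
        for (double f : fitnesses) total += f;
        double r = rng.nextDouble() * total;  // the spin lands somewhere in [0, total)
        double cumulative = 0;
        for (int i = 0; i < fitnesses.length; i++) {
            cumulative += fitnesses[i];       // right edge of item i's segment
            if (r < cumulative) return i;
        }
        return fitnesses.length - 1;          // guard against rounding drift
    }
    public static void main(String[] args) {
        // Segment widths matching the example above (item 1 gets 0.3 of the wheel, etc.)
        double[] fitnesses = {0.3, 0.1, 0.1, 0.07, 0.06, 0.05, 0.12, 0.05, 0.13, 0.02};
        System.out.println("item " + (spin(fitnesses, new Random()) + 1));
    }
}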
Here is a bit of python code:
import random

def roulette_select(population, fitnesses, num):
    """ Roulette selection, implemented according to:
    <http://stackoverflow.com/questions/177271/roulette
    -selection-in-genetic-algorithms/177278#177278>
    """
    total_fitness = float(sum(fitnesses))
    rel_fitness = [f/total_fitness for f in fitnesses]
    # Generate cumulative probability intervals for each individual
    probs = [sum(rel_fitness[:i+1]) for i in range(len(rel_fitness))]
    # Draw new population
    new_population = []
    for n in range(num):
        r = random.random()
        for (i, individual) in enumerate(population):
            if r <= probs[i]:
                new_population.append(individual)
                break
    return new_population
First, generate an array of the percentages you assigned, say p[1..n], and let total be the sum of all the percentages.
Then get a random number r between 1 and total.
Now, the algorithm in Lua:
local c = 0
for i = 1,n do
c = c + p[i]
if r <= c then
return i
end
end
There are two steps to this. First, create an array with all the values on the wheel. This can be a two-dimensional array with colour as well as number, or you can choose to add 100 to red numbers.
Then simply generate a random integer between the first index (0 or 1, depending on whether your language numbers array indexes from 0 or 1) and the index of the last element in your array.
Most languages have built-in random number functions. In VB and VBScript the function is RND(); in JavaScript it is Math.random().
Fetch the value at that position in the array and you have your random roulette number.
Final note: don't forget to seed your random number generator, or you will get the same sequence of draws every time you run the program.
Here is a really quick way to do it using stream selection in Java. It selects the indices of an array, using the values as weights. No cumulative weights are needed because the acceptance probabilities telescope: each index i ends up chosen with probability wts[i] divided by the final total.
static int selectRandomWeighted(double[] wts, Random rnd) {
int selected = 0;
double total = wts[0];
for( int i = 1; i < wts.length; i++ ) {
total += wts[i];
if( rnd.nextDouble() <= (wts[i] / total)) selected = i;
}
return selected;
}
This could be further improved using Kahan summation or reading through the doubles as an iterable if the array was too big to initialize at once.
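A sketch of the same single-pass selection with the Kahan (compensated) summation the answer mentions (my adaptation, not the answerer's code):
static int selectRandomWeightedKahan(double[] wts, java.util.Random rnd) {
    int selected = 0;
    double total = 0.0;
    double c = 0.0;                        // running compensation for lost low-order bits
    for (int i = 0; i < wts.length; i++) {
        double y = wts[i] - c;             // compensated addend
        double t = total + y;
        c = (t - total) - y;               // recover the rounding error
        total = t;
        if (i == 0 || rnd.nextDouble() <= (wts[i] / total)) selected = i;
    }
    return selected;
}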
I wanted the same and so created this self-contained Roulette class. You give it a series of weights (in the form of a double array), and it will simply return an index from that array according to a weighted random pick.
I created a class because you can get a big speed up by only doing the cumulative additions once via the constructor. It's C# code, but enjoy the C like speed and simplicity!
class Roulette
{
double[] c;
double total;
Random random;
public Roulette(double[] n) {
random = new Random();
total = 0;
c = new double[n.Length+1];
c[0] = 0;
// Create cumulative values for later:
for (int i = 0; i < n.Length; i++) {
c[i+1] = c[i] + n[i];
total += n[i];
}
}
public int spin() {
double r = random.NextDouble() * total; // Random number in [0, 1), scaled by the total we calculated earlier.
//int j; for (j = 0; j < c.Length; j++) if (c[j] > r) break; return j-1; // Don't use this - it's slower than the binary search below.
//// Binary search for efficiency. Objective is to find index of the number just above r:
int a = 0;
int b = c.Length - 1;
while (b - a > 1) {
int mid = (a + b) / 2;
if (c[mid] > r) b = mid;
else a = mid;
}
return a;
}
}
The initial weights are up to you. Maybe it could be the fitness of each member, or a value inversely proportional to the member's position in the "top 50". E.g. 1st place = 1.0, 2nd place = 0.5, 3rd place = 0.333, 4th place = 0.25, and so on.
Well, for an American roulette wheel, you're going to need to generate a random integer between 1 and 38. There are 36 numbers, a 0, and a 00.
One of the big things to consider, though, is that in American roulette there are many different bets that can be made. A single bet can cover 1, 2, 3, 4, 5, 6, two different sets of 12, or 18 numbers. You may wish to create a list of lists where each number has additional flags to simplify that, or do it all in the programming.
If I were implementing it in Python, I would just create a Tuple of 0, 00, and 1 through 36 and use random.choice() for each spin.
This assumes some class "Classifier" which just has a String condition, String message, and double strength. Just follow the logic.
-- Paul
public static List<Classifier> rouletteSelection(int classifiers) {
List<Classifier> classifierList = new LinkedList<Classifier>();
double strengthSum = 0.0;
double probabilitySum = 0.0;
// add up the strengths of the map
Set<String> keySet = ClassifierMap.CLASSIFIER_MAP.keySet();
for (String key : keySet) {
/* used for debug to make sure wheel is working.
if (strengthSum == 0.0) {
ClassifierMap.CLASSIFIER_MAP.get(key).setStrength(8000.0);
}
*/
Classifier classifier = ClassifierMap.CLASSIFIER_MAP.get(key);
double strength = classifier.getStrength();
strengthSum = strengthSum + strength;
}
System.out.println("strengthSum: " + strengthSum);
// compute the total probability. this will be 1.00 or close to it.
for (String key : keySet) {
Classifier classifier = ClassifierMap.CLASSIFIER_MAP.get(key);
double probability = (classifier.getStrength() / strengthSum);
probabilitySum = probabilitySum + probability;
}
System.out.println("probabilitySum: " + probabilitySum);
while (classifierList.size() < classifiers) {
boolean winnerFound = false;
double rouletteRandom = random.nextDouble();
double rouletteSum = 0.0;
for (String key : keySet) {
Classifier classifier = ClassifierMap.CLASSIFIER_MAP.get(key);
double probability = (classifier.getStrength() / strengthSum);
rouletteSum = rouletteSum + probability;
if (rouletteSum > rouletteRandom && (winnerFound == false)) {
System.out.println("Winner found: " + probability);
classifierList.add(classifier);
winnerFound = true;
}
}
}
return classifierList;
}
You can use a data structure like this:
Map<A, B> roulette_wheel_schema = new LinkedHashMap<A, B>()
where A is an integer that represents a pocket of the roulette wheel, and B is an index that identifies a chromosome in the population. The number of pockets is proportional to each chromosome's fitness proportionate:
number of pockets = (fitness proportionate) · (scale factor)
Then we generate a random number between 0 and the size of the selection schema, and with this random number we get the index of the chromosome from the roulette.
We calculate the relative error between the fitness proportionate of each chromosome and its probability of being selected by the selection scheme.
The method getRouletteWheel returns the selection scheme based on the previous data structure.
private Map<Integer, Integer> getRouletteWheel(
        ArrayList<Chromosome_fitnessProportionate> chromosomes,
        int precision) {
    /*
     * The number of pockets on the wheel:
     * pockets in roulette_wheel_schema = probability · (10^precision)
     */
    Map<Integer, Integer> roulette_wheel_schema = new LinkedHashMap<Integer, Integer>();
    int key_counter = -1;
    double scale_factor = Math.pow(10.0D, precision);
    for (int index_chromosome = 0; index_chromosome < chromosomes.size(); index_chromosome++) {
        Chromosome_fitnessProportionate chromosome =
                chromosomes.get(index_chromosome);
        double fitness_proportionate =
                chromosome.getFitness_proportionate() * scale_factor;
        double pockets = Math.rint(fitness_proportionate);
        System.out.println("... " + index_chromosome + " : " + pockets);
        for (int j = 0; j < pockets; j++) {
            roulette_wheel_schema.put(Integer.valueOf(++key_counter),
                    Integer.valueOf(index_chromosome));
        }
    }
    return roulette_wheel_schema;
}
I have worked out Java code similar to that of Dan Dyer (referenced earlier). My roulette wheel, however, selects a single element based on a probability vector (input) and returns the index of the selected element.
Having said that, the following code is more appropriate if the selection size is unitary and you make no assumption about how the probabilities were calculated; zero probability values are allowed. The code is self-contained and includes a test with 20 wheel spins (to run).
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* Roulette-wheel Test version.
* Features a probability vector input with possibly zero probability values.
* Appropriate for adaptive operator selection such as Probability Matching
* or Adaptive Pursuit, (Dynamic) Multi-armed Bandit.
* @version October 2015.
* @author Hakim Mitiche
*/
public class RouletteWheel {
/**
* Selects an element probabilistically.
* @param wheelProbabilities elements probability vector.
* @param rng random generator object
* @return selected element index
* @throws java.lang.Exception
*/
public int select(List<Double> wheelProbabilities, Random rng)
throws Exception{
double[] cumulativeProba = new double[wheelProbabilities.size()];
cumulativeProba[0] = wheelProbabilities.get(0);
for (int i = 1; i < wheelProbabilities.size(); i++)
{
double proba = wheelProbabilities.get(i);
cumulativeProba[i] = cumulativeProba[i - 1] + proba;
}
int last = wheelProbabilities.size()-1;
// compare with a small tolerance: exact equality is fragile for doubles
if (Math.abs(cumulativeProba[last] - 1.0) > 1e-9)
{
    throw new Exception("The probabilities do not sum up to one ("
            + "sum=" + cumulativeProba[last] + ")");
}
double r = rng.nextDouble();
int selected = Arrays.binarySearch(cumulativeProba, r);
if (selected < 0)
{
/* Convert negative insertion point to array index.
to find the correct cumulative proba range index.
*/
selected = Math.abs(selected + 1);
}
/* skip indexes of elements with Zero probability,
go backward to matching index*/
int i = selected;
while (wheelProbabilities.get(i) == 0.0){
System.out.print(i+" selected, correction");
i--;
if (i<0) i=last;
}
selected = i;
return selected;
}
public static void main(String[] args){
RouletteWheel rw = new RouletteWheel();
int rept = 20;
List<Double> P = new ArrayList<>(4);
P.add(0.2);
P.add(0.1);
P.add(0.6);
P.add(0.1);
Random rng = new Random();
for (int i = 0 ; i < rept; i++){
try {
int s = rw.select(P, rng);
System.out.println("Element selected "+s+ ", P(s)="+P.get(s));
} catch (Exception ex) {
Logger.getLogger(RouletteWheel.class.getName()).log(Level.SEVERE, null, ex);
}
}
P.clear();
P.add(0.2);
P.add(0.0);
P.add(0.5);
P.add(0.0);
P.add(0.1);
P.add(0.2);
//rng = new Random();
for (int i = 0 ; i < rept; i++){
try {
int s = rw.select(P, rng);
System.out.println("Element selected "+s+ ", P(s)="+P.get(s));
} catch (Exception ex) {
Logger.getLogger(RouletteWheel.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
/**
* {@inheritDoc}
* @return
*/
@Override
public String toString()
{
return "Roulette Wheel Selection";
}
}
Below is a sample run for the probability vector P = [0.2, 0.1, 0.6, 0.1],
with wheel elements [0, 1, 2, 3]:
Element selected 3, P(s)=0.1
Element selected 2, P(s)=0.6
Element selected 3, P(s)=0.1
Element selected 2, P(s)=0.6
Element selected 1, P(s)=0.1
Element selected 2, P(s)=0.6
Element selected 3, P(s)=0.1
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 3, P(s)=0.1
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 0, P(s)=0.2
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
Element selected 2, P(s)=0.6
The code also tests a roulette wheel with zero probability.
I am afraid that anybody using the built-in random number generator of any programming language must be aware that the numbers generated are not 100% random, so they should be used with caution.
Random number generator pseudocode:
add one to a sequential counter
get the current value of the sequential counter
add to the counter value the computer tick count or some other small-interval timer value
optionally add further numbers, like a number from an external piece of hardware such as a plasma generator or some other somewhat-random phenomenon
divide the result by a very big prime number, 359334085968622831041960188598043661065388726959079837 for example
take some digits from far to the right of the decimal point of the result
use these digits as a random number
Use the random-number digits to create random numbers between 1 and 38 (or 37 European) for roulette.
