How do I combine observable loop results into an array in RxSwift?

I have a simple stream of numbers. I want to perform some mathematical operations on each one and then collect the results sequentially in an array. How can I do that?
func test(number: Int) -> Observable<Int> {
    let obs2 = Observable<Int>.create { obs -> Disposable in
        obs.onNext(number + 10)
        obs.onCompleted() // without this, completion-based operators such as reduce never fire
        return Disposables.create()
    }
    return obs2
}
let obs = Observable.from([1, 2, 3, 4]).flatMap { item -> Observable<Int> in
    self.test(number: item)
}.map { result -> Int in
    return result
}
// I want this:
obs.subscribe(onNext: { (results: [Int]) in
    ...
})
I can't figure out how to combine the individual Int values into a single array.

let arrayObservable = obs.reduce([]) { acc, element in acc + [element] }
reduce starts with an empty array and appends each element of the stream to it. It then emits a single .next event, carrying the accumulated array, after the source obs completes (which is why the inner observables above need to call onCompleted). RxSwift also provides a built-in toArray() operator that performs exactly this kind of collection.
Another option would be to use the buffer operator. But keep in mind that the resulting arrays will only contain up to a certain number of elements, and that it will also emit every timeSpan, even if the source did not emit any items.

Related

How can I check if a vector is a subsequence (in the same order but not contiguous) of another vector?

How can I check if all elements of vector_a also appear, in the same order, in vector_b?
vector_b could be very long, there is no assumption that it is sorted, but it does not have duplicate elements.
I could not find a method implemented for Vec or in itertools, so I tried implementing it as follows:
Create a hashmap from vector_b mapping value -> index
Iterate over vector_a and check that:
Element exists in hashmap
Index is strictly greater than previous element's index
I am not really happy with this as it is not space efficient due to the creation of the hashmap.
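For concreteness, here is a minimal sketch of that hashmap approach (the element type u32 and the function name are illustrative assumptions):
use std::collections::HashMap;

fn is_subsequence_via_hashmap(vector_a: &[u32], vector_b: &[u32]) -> bool {
    // map each value of vector_b to its index (vector_b has no duplicates)
    let index_of: HashMap<u32, usize> =
        vector_b.iter().enumerate().map(|(i, &v)| (v, i)).collect();

    let mut prev_index: Option<usize> = None;
    for element in vector_a {
        match index_of.get(element) {
            // the index must be strictly greater than the previous element's index
            Some(&index) if prev_index.map_or(true, |prev| index > prev) => {
                prev_index = Some(index);
            }
            _ => return false,
        }
    }
    true
}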
Search for each element of the needle in the haystack in order. Each time you find a matching element, only continue the search in the remaining portion of the haystack. You can express this nicely by taking a new subslice of the haystack each time you match an element.
fn is_subsequence<T: PartialEq>(needle: &[T], mut haystack: &[T]) -> bool {
    for search in needle {
        if let Some(index) = haystack.iter().position(|el| search == el) {
            haystack = &haystack[index + 1..];
        } else {
            return false;
        }
    }
    true
}
assert!(is_subsequence(b"", b"0123456789"));
assert!(is_subsequence(b"0", b"0123456789"));
assert!(is_subsequence(b"059", b"0123456789"));
assert!(is_subsequence(b"345", b"0123456789"));
assert!(is_subsequence(b"0123456789", b"0123456789"));
assert!(!is_subsequence(b"335", b"0123456789"));
assert!(!is_subsequence(b"543", b"0123456789"));
A slice is just a pointer and a size, stored on the stack, so this does no new allocations. It runs in O(n) time and should be close to the fastest possible implementation - or at least in the same ballpark.
The easiest way to do it is to iterate over the two slices jointly:
fn contains<T: PartialEq>(needle: &[T], haystack: &[T]) -> bool {
    let mut idx = 0;
    for it in needle {
        while (idx < haystack.len()) && (&haystack[idx] != it) {
            idx += 1;
        }
        if idx == haystack.len() {
            return false;
        }
        idx += 1; // move past the matched element so it can't be matched twice
    }
    true
}

Increment Key, Decrement Key, Find Max Key, Find Min Key in O(1) time

I was asked this question in an interview but could not solve it. Design a data structure that supports the following operations:
Inc(Key) -> Takes a key and increments its value by 1. If the key appears for the first time, set its value to 1.
Dec(Key) -> Takes a key and decrements its value by 1. It is guaranteed that a value is never decremented below its minimum of 1.
FindMaxKey() -> Returns a key which has the maximum value. If there are multiple such keys, you may output any of them.
FindMinKey() -> Returns a key which has the minimum value. If there are multiple such keys, you may output any of them.
All operations have to run in O(1) time.
Hint: the interviewer suggested using a dictionary (hashmap) combined with a doubly linked list.
The data structure could be constructed as follows:
Store all keys that have the same count in a HashSet keys, and accompany that set with the value for count: let's call this pair of count and keys a "bucket".
For each count value for which there is at least one key, you'd have such a bucket. Put the buckets in a doubly linked list bucketList, and keep them ordered by count.
Also create a HashMap bucketsByKey that maps a key to the bucket where that key is currently stored (the key is listed in the bucket's keys set)
The FindMinKey operation is then simple: get the first bucket from bucketList, grab any key from its keys set (no matter which), and return it. FindMaxKey works the same way with the last bucket.
The Inc(key) operation would perform the following steps:
Get the bucket corresponding to key from bucketsByKey
If that bucket exists, delete the key from its keys set.
If that set happens to become empty, remove the bucket from bucketList
If the next bucket in bucketList has a count that is one more, add the key to its set and update bucketsByKey so that it refers to this bucket for this key.
If the next bucket in bucketList has a different count (or there are no more buckets), create a new bucket with the right count and key and insert it just before the earlier found bucket in bucketList -- or, if no next bucket was found, add the new one at the end.
If in step 2 there was no bucket found for this key, assume its count was 0, take the first bucket from bucketList, and use it as the "next bucket" from step 4 onwards.
The process for Dec(key) is similar, except that when the count is found to be already 1, nothing happens.
Here is an interactive snippet in JavaScript. It uses the native Map for the HashMap, the native Set for the HashSet, and implements the doubly linked list as a circular one, where the start/end is marked by a "sentinel" node (without data).
You can press the Inc/Dec buttons for a key of your choice and monitor the output of FindMinKey and FindMaxKey, as well as a simple view of the data structure.
class Bucket {
    constructor(count) {
        this.keys = new Set; // keys in this hashset all have the same count:
        this.count = count; // will never change. It's the unique key identifying this bucket
        this.next = this; // next bucket in a doubly linked, circular list
        this.prev = this; // previous bucket in the list
    }
    delete() { // detach this bucket from the list it is in
        this.next.prev = this.prev;
        this.prev.next = this.next;
        this.next = this;
        this.prev = this;
    }
    insertBefore(node) { // inject `this` into the list that `node` is in, right before it
        this.next = node;
        this.prev = node.prev;
        this.prev.next = this;
        this.next.prev = this;
    }
    * nextBuckets() { // iterate all following buckets until the "sentinel" bucket is encountered
        for (let bucket = this.next; bucket.count; bucket = bucket.next) {
            yield bucket;
        }
    }
}
class MinMaxMap {
    constructor() {
        this.bucketsByKey = new Map; // hashmap of key -> bucket
        this.bucketList = new Bucket(0); // a sentinel node of a circular doubly linked list of buckets
    }
    inc(key) {
        this.add(key, 1);
    }
    dec(key) {
        this.add(key, -1);
    }
    add(key, one) {
        let nextBucket, count = 1;
        let bucket = this.bucketsByKey.get(key);
        if (bucket === undefined) {
            nextBucket = this.bucketList.next;
        } else {
            count = bucket.count + one;
            if (count < 1) return;
            bucket.keys.delete(key);
            nextBucket = one === 1 ? bucket.next : bucket.prev;
            if (bucket.keys.size === 0) bucket.delete(); // remove from its list
        }
        if (nextBucket.count !== count) {
            bucket = new Bucket(count);
            bucket.insertBefore(one === 1 ? nextBucket : nextBucket.next);
        } else {
            bucket = nextBucket;
        }
        bucket.keys.add(key);
        this.bucketsByKey.set(key, bucket);
    }
    findMaxKey() {
        if (this.bucketList.prev.count === 0) return null; // the list is empty
        return this.bucketList.prev.keys.values().next().value; // get any key from the last bucket
    }
    findMinKey() {
        if (this.bucketList.next.count === 0) return null; // the list is empty
        return this.bucketList.next.keys.values().next().value; // get any key from the first bucket
    }
    toString() {
        return JSON.stringify(Array.from(this.bucketList.nextBuckets(), ({count, keys}) => [count, ...keys]));
    }
}
// I/O handling
let inpKey = document.querySelector("input");
let [btnInc, btnDec] = document.querySelectorAll("button");
let [outData, outMin, outMax] = document.querySelectorAll("span");
let minMaxMap = new MinMaxMap;
btnInc.addEventListener("click", function () {
    minMaxMap.inc(inpKey.value);
    refresh();
});
btnDec.addEventListener("click", function () {
    minMaxMap.dec(inpKey.value);
    refresh();
});
function refresh() {
    outData.textContent = minMaxMap.toString();
    outMin.textContent = minMaxMap.findMinKey();
    outMax.textContent = minMaxMap.findMaxKey();
}
key: <input> <button>Inc</button> <button>Dec</button><br>
data structure (linked list): <span></span><br>
findMinKey = <span></span><br>
findMaxKey = <span></span>
Here is my answer; still, I'm not sure that I haven't broken any of the constraints your interviewer had in mind.
We keep a linked list where each element stores a key, its value, and pointers to its previous and next elements, and the list is always kept sorted by value. For every key we store a pointer to its place in the linked list. Furthermore, for every distinct value we see, we add two extra elements marking the start and end of that value's segment, and store pointers to them. Since we add at most two such elements per operation, this is still O(1).
Now, for every operation (say, an increment), we find where the element corresponding to the key is placed in the linked list using a dictionary (assuming dictionary operations take O(1) time). We then find the last element in the linked list that has the same value (using the element marking the end of that value's segment and stepping one element back) and swap the two elements' pointers (a simple swap that does not affect other elements). Next we swap this element with its successor twice, so that it falls into the segment of the next value (we may need to create that segment first). The last thing to keep track of is the current minimum and maximum, which must be updated if the element being changed is the current minimum or maximum and no other key has the same value (i.e., the start and end markers for that value are adjacent in the linked list).
Still, I think this approach can be improved.
The key observation is that the problem only asks for dec(1) and inc(1). Therefore, the algorithm only needs to move a key one block forward or backward. That's a strong prior and gives a lot of information.
My tested code:
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>

template <typename K, uint32_t N>
struct DumbStructure {
 private:
  const int head_ = 0, tail_ = N - 1;
  std::unordered_map<K, int> dic_;
  int l_[N], r_[N], min_ = -1, max_ = -1;
  std::unordered_set<K> keys_[N];

  void NewKey(const K &key) {
    if (min_ < 0) {
      // nothing on the list
      l_[1] = head_;
      r_[1] = tail_;
      r_[head_] = 1;
      l_[tail_] = 1;
      min_ = max_ = 1;
    } else if (min_ == 1) {
      // the value-1 bucket already exists; nothing to relink
    } else {
      // min_ > 1: insert the value-1 bucket at the front
      l_[1] = head_;
      r_[1] = min_;
      r_[head_] = 1;
      l_[min_] = 1;
      min_ = 1;
    }
    keys_[1].insert(key);
  }

  void MoveKey(const K &key, int from_value, int to_value) {
    int prev_from_value = l_[from_value];
    int succ_from_value = r_[from_value];
    if (keys_[from_value].size() >= 2) {
      // other keys remain in the source bucket; its links stay intact
    } else {
      // the source bucket becomes empty: unlink it
      r_[prev_from_value] = succ_from_value;
      l_[succ_from_value] = prev_from_value;
      if (min_ == from_value) min_ = succ_from_value;
      if (max_ == from_value) max_ = prev_from_value;
    }
    keys_[from_value].erase(key);
    if (keys_[to_value].size() >= 1) {
      // the target bucket is already linked in
    } else {
      if (to_value > from_value) {
        // move forward
        l_[to_value] = keys_[from_value].size() > 0 ? from_value : prev_from_value;
        r_[to_value] = succ_from_value;
        r_[l_[to_value]] = to_value;
        l_[r_[to_value]] = to_value;
      } else {
        // move backward
        l_[to_value] = prev_from_value;
        r_[to_value] = keys_[from_value].size() > 0 ? from_value : succ_from_value;
        r_[l_[to_value]] = to_value;
        l_[r_[to_value]] = to_value;
      }
    }
    keys_[to_value].insert(key);
    min_ = std::min(min_, to_value);
    max_ = std::max(max_, to_value);
  }

 public:
  DumbStructure() {
    l_[head_] = -1;
    r_[head_] = tail_;
    l_[tail_] = head_;
    r_[tail_] = -1;
  }

  void Inc(const K &key) {
    if (dic_.count(key) == 0) {
      dic_[key] = 1;
      NewKey(key);
    } else {
      MoveKey(key, dic_[key], dic_[key] + 1);
      dic_[key] += 1;
    }
  }

  void Dec(const K &key) {
    if (dic_.count(key) == 0 || dic_[key] == 1) {
      // invalid
      return;
    } else {
      MoveKey(key, dic_[key], dic_[key] - 1);
      dic_[key] -= 1;
    }
  }

  K GetMaxKey() const { return *keys_[max_].begin(); }
  K GetMinKey() const { return *keys_[min_].begin(); }
};

Can I randomly sample from a HashSet efficiently?

I have a std::collections::HashSet, and I want to sample and remove a uniformly random element.
Currently, what I'm doing is randomly sampling an index using rand's gen_range, then iterating over the HashSet to that index to get the element. Then I remove the selected element. This works, but it's not efficient. Is there an efficient way to randomly sample an element?
Here's a stripped-down version of what my code looks like:
extern crate rand;

use std::collections::HashSet;
use rand::thread_rng;
use rand::Rng;

let mut hash_set = HashSet::new();
// ... Fill up hash_set ...
let index = thread_rng().gen_range(0, hash_set.len());
let element = hash_set.iter().nth(index).unwrap().clone();
hash_set.remove(&element);
// ... Use element ...
The only data structures allowing uniform sampling in constant time are data structures with constant time index access. HashSet does not provide indexing, so you can’t generate random samples in constant time.
I suggest converting your hash set to a Vec first, and then sampling from the vector. To remove an element, simply move the last element into its place – the order of the elements in the vector is immaterial anyway.
If you want to consume all elements from the set in random order, you can also shuffle the vector once and then iterate over it.
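For instance, a minimal sketch of that shuffle-once approach (assuming the rand 0.8 API):
use rand::seq::SliceRandom;
use rand::thread_rng;
use std::collections::HashSet;

fn main() {
    let hash_set: HashSet<u32> = (0..10).collect();

    // Convert once, shuffle once, then consume in random order.
    let mut elements: Vec<u32> = hash_set.into_iter().collect();
    elements.shuffle(&mut thread_rng());
    for element in elements {
        println!("{}", element);
    }
}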
Here is an example implementation for removing a random element from a Vec in constant time:
use rand::{thread_rng, Rng};

pub trait RemoveRandom {
    type Item;
    fn remove_random<R: Rng>(&mut self, rng: &mut R) -> Option<Self::Item>;
}

impl<T> RemoveRandom for Vec<T> {
    type Item = T;
    fn remove_random<R: Rng>(&mut self, rng: &mut R) -> Option<Self::Item> {
        if self.len() == 0 {
            None
        } else {
            let index = rng.gen_range(0..self.len());
            Some(self.swap_remove(index))
        }
    }
}
Thinking about Sven Marnach's answer, I wanted to use a vector, but I also needed constant-time insertion without duplication. Then I realized that I can maintain both a vector and a set, and ensure that they both have the same elements at all times. This allows both constant-time insertion with deduplication and constant-time random removal.
Here's the implementation I ended up with:
use std::collections::HashSet;
use rand::{thread_rng, Rng};

struct VecSet<T> {
    set: HashSet<T>,
    vec: Vec<T>,
}

impl<T> VecSet<T>
where
    T: Clone + Eq + std::hash::Hash,
{
    fn new() -> Self {
        Self {
            set: HashSet::new(),
            vec: Vec::new(),
        }
    }

    fn insert(&mut self, elem: T) {
        assert_eq!(self.set.len(), self.vec.len());
        let was_new = self.set.insert(elem.clone());
        if was_new {
            self.vec.push(elem);
        }
    }

    fn remove_random(&mut self) -> T {
        assert_eq!(self.set.len(), self.vec.len());
        let index = thread_rng().gen_range(0, self.vec.len());
        let elem = self.vec.swap_remove(index);
        let was_present = self.set.remove(&elem);
        assert!(was_present);
        elem
    }

    fn is_empty(&self) -> bool {
        assert_eq!(self.set.len(), self.vec.len());
        self.vec.is_empty()
    }
}
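For illustration, a hypothetical usage of this VecSet (it assumes the two-argument gen_range of rand versions before 0.8, matching the code above):
fn main() {
    let mut vec_set = VecSet::new();
    vec_set.insert("a");
    vec_set.insert("b");
    vec_set.insert("a"); // deduplicated: the set already contains "a"

    // Drain the structure in uniformly random order.
    while !vec_set.is_empty() {
        println!("{}", vec_set.remove_random());
    }
}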
Sven's answer suggests converting the HashSet to a Vec, in order to randomly sample from the Vec in O(1) time. This conversion takes O(n) time and is suitable if the conversion needs to be done only sparingly; e.g., for taking a series of random samples from an otherwise unchanging hashset. It is less suitable if conversions need to be done often, e.g., if, between taking random samples, one wants to intersperse some O(1) removals-by-value from the HashSet, since that would involve converting back and forth between HashSet and Vec, with each conversion taking O(n) time.
isaacg's solution is to keep both a HashSet and a Vec and operate on them in tandem. This allows O(1) lookup by index, O(1) random removal, and O(1) insertion, but not O(1) lookup by value or O(1) removal by value (because the Vec can't do those).
Below, I give a data structure that allows O(1) lookup by index or by value, O(1) insertion, and O(1) removal by index or value:
It is a HashMap<T, usize> together with a Vec<T>, such that the Vec maps indexes (which are usizes) to Ts, while the HashMap maps Ts to usizes. The HashMap and Vec can be thought of as inverse functions of one another, so that you can go from an index to its value, and from a value back to its index. The insertion and deletion operations are defined so that the indexes are precisely the integers from 0 to size()-1, with no gaps allowed. I call this data structure a BijectiveFiniteSequence. (Note the take_random_val method; it works in O(1) time.)
use std::collections::HashMap;
use std::hash::Hash; // needed for the `Hash` bound below
use rand::{thread_rng, Rng};

#[derive(Clone, Debug)]
struct BijectiveFiniteSequence<T: Eq + Copy + Hash> {
    idx_to_val: Vec<T>,
    val_to_idx: HashMap<T, usize>,
}

impl<T: Eq + Copy + Hash> BijectiveFiniteSequence<T> {
    fn new() -> BijectiveFiniteSequence<T> {
        BijectiveFiniteSequence {
            idx_to_val: Vec::new(),
            val_to_idx: HashMap::new(),
        }
    }

    fn insert(&mut self, val: T) {
        // ignore duplicates to preserve the bijection
        if !self.val_to_idx.contains_key(&val) {
            self.idx_to_val.push(val);
            self.val_to_idx.insert(val, self.len() - 1);
        }
    }

    fn take_random_val(&mut self) -> Option<T> {
        if self.len() == 0 {
            return None; // gen_range would panic on an empty range
        }
        let mut rng = thread_rng();
        let rand_idx: usize = rng.gen_range(0..self.len());
        self.remove_by_idx(rand_idx)
    }

    fn remove_by_idx(&mut self, idx: usize) -> Option<T> {
        match idx < self.len() {
            true => {
                let val = self.idx_to_val[idx];
                let last_idx = self.len() - 1;
                self.idx_to_val.swap(idx, last_idx);
                self.idx_to_val.pop();
                // update the hashmap entry of the element swapped into `idx`
                // (unless we just removed the last element)
                if idx < self.idx_to_val.len() {
                    self.val_to_idx.insert(self.idx_to_val[idx], idx);
                }
                self.val_to_idx.remove(&val);
                Some(val)
            }
            false => None,
        }
    }

    fn remove_val(&mut self, val: T) -> Option<T> {
        // delegates to remove_by_idx above
        match self.contains(&val) {
            true => {
                let idx: usize = *self.val_to_idx.get(&val).unwrap();
                self.remove_by_idx(idx)
            }
            false => None,
        }
    }

    fn get_idx_of(&mut self, val: &T) -> Option<&usize> {
        self.val_to_idx.get(val)
    }

    fn get_val_at(&mut self, idx: usize) -> Option<T> {
        match idx < self.len() {
            true => Some(self.idx_to_val[idx]),
            false => None,
        }
    }

    fn contains(&self, val: &T) -> bool {
        self.val_to_idx.contains_key(val)
    }

    fn len(&self) -> usize {
        self.idx_to_val.len()
    }

    // etc. etc. etc.
}
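A hypothetical usage sketch of the structure above (names mirror the methods defined there):
fn main() {
    let mut seq = BijectiveFiniteSequence::new();
    for i in 0..10 {
        seq.insert(i);
    }
    assert!(seq.contains(&3));
    seq.remove_val(3); // O(1) removal by value

    // Drain the rest in random order; each take runs in O(1) time.
    while let Some(val) = seq.take_random_val() {
        println!("{}", val);
    }
}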
According to the documentation for HashSet::iter it returns "An iterator visiting all elements in arbitrary order."
Arbitrary is perhaps not exactly uniform randomness, but if it's close enough for your use case, this is O(1) and will return different values each time:
// Build a set of integers 0 - 99
let mut set = HashSet::new();
for i in 0..100 {
    set.insert(i);
}

// Sample
for _ in 0..10 {
    let n = set.iter().next().unwrap().clone();
    println!("{}", n);
    set.remove(&n);
}
Like the author, I wanted to remove the value after sampling from the HashSet. Note that sampling multiple times this way without removing the element seems to yield the same result each time.

How (if possible) to sort a BTreeMap by value in Rust?

I am following a course on Software Security for which one of the assignments is to write some basic programs in Rust. For one of these assignments I need to analyze a text-file and generate several statistics. One of these is a generated list of the ten most used words in the text.
I have written this program, which performs all tasks in the assignment except for the word frequency statistic mentioned above. The program compiles and executes the way I expect:
extern crate regex;

use std::error::Error;
use std::fs::File;
use std::io::prelude::*;
use std::path::Path;
use std::io::BufReader;
use std::collections::BTreeMap;
use regex::Regex;

fn main() {
    // Create a path to the desired file
    let path = Path::new("text.txt");
    let display = path.display();
    let file = match File::open(&path) {
        Err(why) => panic!("couldn't open {}: {}", display, why.description()),
        Ok(file) => file,
    };
    let mut wordcount = 0;
    let mut averagesize = 0;
    let mut wordsize = BTreeMap::new();
    let mut words = BTreeMap::new();
    for line in (BufReader::new(file)).lines() {
        let re = Regex::new(r"([A-Za-z]+[-_]*[A-Za-z]+)+").unwrap();
        for cap in re.captures_iter(&line.unwrap()) {
            let word = cap.at(1).unwrap_or("");
            let lower = word.to_lowercase();
            let s = lower.len();
            wordcount += 1;
            averagesize += s;
            *words.entry(lower).or_insert(0) += 1;
            *wordsize.entry(s).or_insert(0) += 1;
        }
    }
    averagesize = averagesize / wordcount;
    println!("This file contains {} words with an average of {} letters per word.", wordcount, averagesize);
    println!("\nThe number of times a word of a certain length was found.");
    for (size, count) in wordsize.iter() {
        println!("There are {} words of size {}.", count, size);
    }
    println!("\nThe ten most used words.");
    let mut popwords = BTreeMap::new();
    for (word, count) in words.iter() {
        if !popwords.contains_key(count) {
            popwords.insert(count, "");
        }
        let newstring = format!("{} {}", popwords.get(count), word);
        let mut e = popwords.get_mut(count);
    }
    let mut i = 0;
    for (count, words) in popwords.iter() {
        i += 1;
        if i > 10 {
            break;
        }
        println!("{} times: {}", count, words);
    }
}
I have a BTreeMap (that I chose following these instructions), words, that stores each word as key and its frequency in the text as value. This functionality works as I expect, but then I am stuck. I have been trying to find a way to sort the BTreeMap by value, or another data structure in Rust that is natively sorted by value.
I am looking for the correct way to achieve this data structure (a list of words with their frequencies, sorted by frequency) in Rust. Any pointers are greatly appreciated!
If you only need to analyze a static dataset, the easiest way is to just convert your BTreeMap into a Vec in the end and sort the latter:
use std::iter::FromIterator;
let mut v = Vec::from_iter(map);
v.sort_by(|&(_, a), &(_, b)| b.cmp(&a));
The vector contains the (key, value) pairs as tuples. To sort the vector, we have to use sort_by() or sort_by_key(). To sort the vector in decreasing order, I used b.cmp(&a) (as opposed to a.cmp(&b), which would give the natural, increasing order). But there are other possibilities to reverse the order of a sort.
However, if you really need some data structure that supports a streaming calculation, it gets more complicated. There are many possibilities in that case, but I guess using some kind of priority queue could work out, as sketched below.
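For example, here is a minimal sketch of the priority-queue idea using the standard library's BinaryHeap; the word counts are hypothetical stand-ins for the words map built in the question:
use std::collections::BinaryHeap;

fn main() {
    // Hypothetical (word, count) data; in the real program this would
    // come from the `words` BTreeMap.
    let counts = vec![("the", 42u32), ("fox", 7), ("jumps", 3), ("lazy", 7)];

    // BinaryHeap is a max-heap; storing (count, word) orders entries
    // by count first, so the most frequent words pop out first.
    let mut heap: BinaryHeap<(u32, &str)> =
        counts.into_iter().map(|(word, count)| (count, word)).collect();

    // Print up to the ten most used words.
    for _ in 0..10 {
        match heap.pop() {
            Some((count, word)) => println!("{} times: {}", count, word),
            None => break,
        }
    }
}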

Find overlap of "circular" ranges

By circular I mean a range can cross the max-value and loop back starting at 0. For example:
Given a max-value:
9
And two ranges (both of which can be "circular"):
0123456789
----range1 (a normal range)
ge2----ran (a circular range)
What is a good algorithm to calculate their intersection(s)?
In this case the intersection(s) would be:
7-9
789
ge1
ran
Bonus for an algorithm that can "delete" one from the other.
By delete I mean, one range is being completely extracted from another:
0123456789
----range1
ge2----ran
subtracting range2 from range1 would yield:
3456
-ran
Update: the numbers are always integers. There are only ever two ranges being compared at once, and they are always contiguous, though, as noted, they may span 0.
Also note, it would be nice to output a boolean indicating whether one range fully contains the other. I think I may have thought of a nice way to do so.
Thanks!
It looks as though you can simply take each discrete element of your range and put it in a set. You can then perform an intersection of the sets to get the output elements.
This can be done in O(M+N) time by using a hash table.
Walk through your first range, creating an entry in the hash table for each element which is a member of the range.
Then walk through the second range and look each element up. If it is already in the hash table, then it is part of the intersection of the ranges.
With a little thought, you'll figure out how set differencing works.
If you need to intersect a third range, remove elements from the table that were not part of the second range.
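Here is a minimal sketch of that idea, using Rust's HashSet as the hash table (the element type and range encoding are assumptions):
use std::collections::HashSet;

// Expand a possibly circular [start, end] range over 0..=max_value
// into its set of elements.
fn elements(start: u32, end: u32, max_value: u32) -> HashSet<u32> {
    if start <= end {
        (start..=end).collect()
    } else {
        (start..=max_value).chain(0..=end).collect()
    }
}

fn main() {
    let max_value = 9;
    let range1 = elements(4, 9, max_value); // "----range1"
    let range2 = elements(7, 2, max_value); // "ge2----ran" (circular)

    // Intersection: elements in both ranges -> {7, 8, 9}
    let overlap: HashSet<_> = range1.intersection(&range2).copied().collect();
    println!("{:?}", overlap);

    // Set difference: range1 minus range2 -> {4, 5, 6}
    let remainder: HashSet<_> = range1.difference(&range2).copied().collect();
    println!("{:?}", remainder);
}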
Here's an update about how I went about solving my question above. Basically the strategy is divide and conquer:
Both ranges are split into two separate sections if need be. The resulting non-circular pieces are then compared one pair at a time.
Hope this helps someone out; tell me if you see any logical errors in this strategy. Later I'll post the "deletion" algorithm I mentioned above.
Also note, the ranges are 0-based.
var arePositiveIntegers = require('./arePositiveIntegers');

//returns an array of the overlaps between two potentially circular ranges
module.exports = function getOverlapsOfPotentiallyCircularRanges(rangeA, rangeB, maxLength) {
  if (!arePositiveIntegers(rangeA.start, rangeA.end, rangeB.start, rangeB.end)) {
    console.warn("unable to calculate ranges of inputs");
    return [];
  }
  var normalizedRangeA = splitRangeIntoTwoPartsIfItIsCircular(rangeA, maxLength);
  var normalizedRangeB = splitRangeIntoTwoPartsIfItIsCircular(rangeB, maxLength);
  var overlaps = [];
  normalizedRangeA.forEach(function(nonCircularRangeA) {
    normalizedRangeB.forEach(function(nonCircularRangeB) {
      var overlap = getOverlapOfNonCircularRanges(nonCircularRangeA, nonCircularRangeB);
      if (overlap) {
        overlaps.push(overlap);
      }
    });
  });
  return overlaps;
};

//takes a potentially circular range and returns an array containing the range split on the origin
function splitRangeIntoTwoPartsIfItIsCircular(range, maxLength) {
  if (range.start <= range.end) {
    //the range isn't circular, so we just return the range
    return [{
      start: range.start,
      end: range.end
    }];
  } else {
    //the range is circular, so we return an array of two ranges
    return [{
      start: 0,
      end: range.end
    }, {
      start: range.start,
      end: maxLength - 1
    }];
  }
}
function getOverlapOfNonCircularRanges(rangeA, rangeB) {
  if (!arePositiveIntegers(rangeA.start, rangeA.end, rangeB.start, rangeB.end)) {
    console.warn("unable to calculate ranges of inputs");
    return null;
  }
  if (rangeA.start < rangeB.start) {
    if (rangeA.end < rangeB.start) {
      //no overlap
      return null;
    } else {
      if (rangeA.end < rangeB.end) {
        return {
          start: rangeB.start,
          end: rangeA.end
        };
      } else {
        return {
          start: rangeB.start,
          end: rangeB.end
        };
      }
    }
  } else {
    if (rangeA.start > rangeB.end) {
      //no overlap
      return null;
    } else {
      if (rangeA.end < rangeB.end) {
        return {
          start: rangeA.start,
          end: rangeA.end
        };
      } else {
        return {
          start: rangeA.start,
          end: rangeB.end
        };
      }
    }
  }
}
