Data structure: insert, remove, contains, get random element, all at O(1)

I was given this problem in an interview. How would you have answered?
Design a data structure that offers the following operations in O(1) time:
insert
remove
contains
get random element

Consider a data structure composed of a hashtable H and an array A. The hashtable keys are the elements in the data structure, and the values are their positions in the array.
insert(value): append the value to the array A and let i be its index. Set H[value]=i.
remove(value): we replace the cell that holds value in A with the last element of A. Let d be the last element of A, and let i=H[value] be the index of the value to be removed. Set A[i]=d and H[d]=i, shrink the array by one, and remove value from H.
contains(value): return H.contains(value)
getRandomElement(): let r=random(current size of A). return A[r].
Since the array needs to grow automatically, adding an element is amortized O(1), but I guess that's OK.
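The scheme described above fits in a few lines; here is a minimal sketch (Python for brevity; the class and method names are illustrative):

```python
import random

class RandomizedSet:
    """Hash map of value -> array index, plus a backing array."""
    def __init__(self):
        self.pos = {}   # value -> index in self.arr
        self.arr = []

    def insert(self, value):
        if value in self.pos:
            return False
        self.pos[value] = len(self.arr)
        self.arr.append(value)
        return True

    def remove(self, value):
        if value not in self.pos:
            return False
        i = self.pos.pop(value)
        last = self.arr.pop()
        if i < len(self.arr):      # value was not the last element
            self.arr[i] = last     # move the last element into the hole
            self.pos[last] = i
        return True

    def contains(self, value):
        return value in self.pos

    def get_random(self):
        return random.choice(self.arr)
```

Every operation is a dictionary lookup plus O(1) array work at the tail, which is exactly why the swap-with-last trick is used in remove().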

O(1) lookup hints at a hashed data structure.
By comparison:
O(1) insert/delete with O(N) lookup implies a linked list.
O(1) insert, O(N) delete, and O(N) lookup implies an array-backed list
O(logN) insert/delete/lookup implies a tree or heap.

For this question I will use two data structures:
HashMap
ArrayList / array / doubly linked list
Steps:
Insert: check if X is already present in the HashMap -- O(1). If not present, append it to the end of the ArrayList -- O(1). Also put it in the HashMap with X as key and the last index as value -- O(1).
Remove: check if X is present in the HashMap -- O(1). If present, look up its index and remove it from the HashMap -- O(1). Swap this element with the last element in the ArrayList and remove the last element -- O(1). Update the index of the moved last element in the HashMap -- O(1).
GetRandom: generate a random number from 0 to the last index of the ArrayList and return the element at that index -- O(1).
Search: look up X as a key in the HashMap -- O(1).
Code:
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Random;
import java.util.Scanner;

public class JavaApplication1 {
    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        ArrayList<Integer> al = new ArrayList<Integer>();
        HashMap<Integer, Integer> mp = new HashMap<Integer, Integer>();
        while (true) {
            System.out.println("**menu**");
            System.out.println("1.insert");
            System.out.println("2.remove");
            System.out.println("3.search");
            System.out.println("4.random");
            int ch = sc.nextInt();
            switch (ch) {
                case 1:
                    System.out.println("Enter the element");
                    int a = sc.nextInt();
                    if (mp.containsKey(a)) {
                        System.out.println("Element is already present");
                    } else {
                        al.add(a);
                        mp.put(a, al.size() - 1);
                    }
                    break;
                case 2:
                    System.out.println("Enter the element which you want to remove");
                    a = sc.nextInt();
                    if (mp.containsKey(a)) {
                        int size = al.size();
                        int index = mp.get(a);
                        int last = al.get(size - 1);
                        Collections.swap(al, index, size - 1);
                        al.remove(size - 1);
                        mp.remove(a);
                        // only re-map the moved element if it wasn't the one removed
                        if (index != size - 1) {
                            mp.put(last, index);
                        }
                        System.out.println("Data deleted");
                    } else {
                        System.out.println("Data not found");
                    }
                    break;
                case 3:
                    System.out.println("Enter the element to search");
                    a = sc.nextInt();
                    if (mp.containsKey(a)) {
                        System.out.println(mp.get(a));
                    } else {
                        System.out.println("Data not found");
                    }
                    break;
                case 4:
                    if (al.isEmpty()) {
                        System.out.println("No elements");
                        break;
                    }
                    Random rm = new Random();
                    int idx = rm.nextInt(al.size());
                    System.out.println(al.get(idx));
                    break;
            }
        }
    }
}
Time complexity: O(1) per operation.
Space complexity: O(N).

You might not like this, because they're probably looking for a clever solution, but sometimes it pays to stick to your guns... A hash table already satisfies the requirements - probably better overall than anything else will (albeit obviously in amortised constant time, and with different compromises to other solutions).
The requirement that's tricky is the "random element" selection: in a hash table, you would need to scan or probe for such an element.
For closed hashing / open addressing, the chance of any given bucket being occupied is size() / capacity(), but crucially this is typically kept in a constant multiplicative range by a hash-table implementation (e.g. the table may be kept larger than its current contents by say 1.2x to ~10x depending on performance/memory tuning). This means on average we can expect to search 1.2 to 10 buckets - totally independent of the total size of the container; amortised O(1).
I can imagine two simple approaches (and a great many more fiddly ones):
search linearly from a random bucket
consider empty/value-holding buckets à la "--AC-----B--D": you can say the first "random" selection is fair even though it favours B (B had no more prior probability of being favoured than any other element), but if you're doing repeated "random" selections over the same contents, then B being repeatedly favoured may be undesirable (nothing in the question demands even probabilities, though)
try random buckets repeatedly until you find a populated one
"only" capacity() / size() average buckets visited (as above) - but in practical terms more expensive because random number generation is relatively expensive, and infinitely bad if infinitely improbable worst-case behaviour...
a faster compromise would be to use a list of pre-generated random offsets from the initial randomly selected bucket, %-ing them into the bucket count
Not a great solution, but may still be a better overall compromise than the memory and performance overheads of maintaining a second index array at all times.
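The probe-from-a-random-bucket idea above can be sketched over a plain open-addressed table like this (illustrative only: fixed capacity, no resizing or deletion, and it inherits the bias toward elements after long empty runs that the answer describes):

```python
import random

class ProbingTable:
    """Open-addressed hash table; a random element is found by linear
    probing from a randomly chosen bucket."""
    def __init__(self, capacity=16):
        self.buckets = [None] * capacity

    def insert(self, value):
        i = hash(value) % len(self.buckets)
        while self.buckets[i] is not None and self.buckets[i] != value:
            i = (i + 1) % len(self.buckets)   # linear probing
        self.buckets[i] = value

    def contains(self, value):
        i = hash(value) % len(self.buckets)
        for _ in range(len(self.buckets)):
            if self.buckets[i] == value:
                return True
            if self.buckets[i] is None:
                return False
            i = (i + 1) % len(self.buckets)
        return False

    def get_random(self):
        # scan linearly from a random bucket until we hit an occupied one;
        # with load factor kept in a constant band, the expected number of
        # buckets visited is capacity/size, independent of n
        i = random.randrange(len(self.buckets))
        while self.buckets[i] is None:
            i = (i + 1) % len(self.buckets)
        return self.buckets[i]
```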

The best solution is probably the hash table + array: it's fast and deterministic.
But the lowest-rated answer (just use a hash table!) is actually great too!
hash table with re-hashing, or new bucket selection (i.e. one element per bucket, no linked lists)
getRandom() repeatedly tries to pick a random bucket until it finds a non-empty one.
as a fail-safe, maybe getRandom(), after N (number of elements) unsuccessful tries, picks a random index i in [0, N-1] and then goes through the hash table linearly and picks the #i-th element.
People might not like this because of "possible infinite loops", and I've seen very smart people have this reaction too, but it's wrong! Infinitely unlikely events just don't happen.
Assuming the good behavior of your pseudo-random source -- which is not hard to establish for this particular behavior -- and that hash tables are always at least 20% full, it's easy to see that:
It will never happen that getRandom() has to try more than 1000 times. Just never. Indeed, the probability of such an event is 0.8^1000, which is 10^-97 -- so we'd have to repeat it 10^88 times to have one chance in a billion of it ever happening once. Even if this program was running full-time on all computers of humankind until the Sun dies, this will never happen.
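The arithmetic above is easy to verify: each probe misses with probability at most 0.8, so 1000 consecutive misses have probability at most 0.8^1000.

```python
import math

# log10 of the probability that 1000 independent probes all miss,
# assuming at least 20% of buckets are occupied (miss prob <= 0.8)
log10_p = 1000 * math.log10(0.8)
print(round(log10_p, 1))  # about -96.9, i.e. p ~ 10^-97
```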

Here is a C# solution to that problem I came up with a little while back when asked the same question. It implements Add, Remove, Contains, and Random along with other standard .NET interfaces. Not that you would ever need to implement it in such detail during an interview but it's nice to have a concrete solution to look at...
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
/// <summary>
/// This class represents an unordered bag of items with the
/// the capability to get a random item. All operations are O(1).
/// </summary>
/// <typeparam name="T">The type of the item.</typeparam>
public class Bag<T> : ICollection<T>, IEnumerable<T>, ICollection, IEnumerable
{
private Dictionary<T, int> index;
private List<T> items;
private Random rand;
private object syncRoot;
/// <summary>
/// Initializes a new instance of the <see cref="Bag<T>"/> class.
/// </summary>
public Bag()
: this(0)
{
}
/// <summary>
/// Initializes a new instance of the <see cref="Bag<T>"/> class.
/// </summary>
/// <param name="capacity">The capacity.</param>
public Bag(int capacity)
{
this.index = new Dictionary<T, int>(capacity);
this.items = new List<T>(capacity);
}
/// <summary>
/// Initializes a new instance of the <see cref="Bag<T>"/> class.
/// </summary>
/// <param name="collection">The collection.</param>
public Bag(IEnumerable<T> collection)
{
this.items = new List<T>(collection);
this.index = this.items
.Select((value, index) => new { value, index })
.ToDictionary(pair => pair.value, pair => pair.index);
}
/// <summary>
/// Get random item from bag.
/// </summary>
/// <returns>Random item from bag.</returns>
/// <exception cref="System.InvalidOperationException">
/// The bag is empty.
/// </exception>
public T Random()
{
if (this.items.Count == 0)
{
throw new InvalidOperationException();
}
if (this.rand == null)
{
this.rand = new Random();
}
int randomIndex = this.rand.Next(0, this.items.Count);
return this.items[randomIndex];
}
/// <summary>
/// Adds the specified item.
/// </summary>
/// <param name="item">The item.</param>
public void Add(T item)
{
this.index.Add(item, this.items.Count);
this.items.Add(item);
}
/// <summary>
/// Removes the specified item.
/// </summary>
/// <param name="item">The item.</param>
/// <returns></returns>
public bool Remove(T item)
{
int keyIndex;
// ICollection<T>.Remove should report absence, not throw
if (!this.index.TryGetValue(item, out keyIndex))
{
return false;
}
// Replace index of value to remove with last item in values list
T lastItem = this.items[this.items.Count - 1];
this.items[keyIndex] = lastItem;
// Update index in dictionary for last item that was just moved
this.index[lastItem] = keyIndex;
// Remove old value
this.index.Remove(item);
this.items.RemoveAt(this.items.Count - 1);
return true;
}
/// <inheritdoc />
public bool Contains(T item)
{
return this.index.ContainsKey(item);
}
/// <inheritdoc />
public void Clear()
{
this.index.Clear();
this.items.Clear();
}
/// <inheritdoc />
public int Count
{
get { return this.items.Count; }
}
/// <inheritdoc />
public void CopyTo(T[] array, int arrayIndex)
{
this.items.CopyTo(array, arrayIndex);
}
/// <inheritdoc />
public bool IsReadOnly
{
get { return false; }
}
/// <inheritdoc />
public IEnumerator<T> GetEnumerator()
{
foreach (var value in this.items)
{
yield return value;
}
}
/// <inheritdoc />
IEnumerator IEnumerable.GetEnumerator()
{
return this.GetEnumerator();
}
/// <inheritdoc />
public void CopyTo(Array array, int index)
{
this.CopyTo(array as T[], index);
}
/// <inheritdoc />
public bool IsSynchronized
{
get { return false; }
}
/// <inheritdoc />
public object SyncRoot
{
get
{
if (this.syncRoot == null)
{
Interlocked.CompareExchange<object>(
ref this.syncRoot,
new object(),
null);
}
return this.syncRoot;
}
}
}

We can use hashing to support operations in Θ(1) time.
insert(x)
1) Check if x is already present by doing a hash map lookup.
2) If not present, then insert it at the end of the array.
3) Add it to the hash table too: x is the key and the last array index is the value.
remove(x)
1) Check if x is present by doing a hash map lookup.
2) If present, then find its index and remove it from hash map.
3) Swap the last element with this element in array and remove the last element.
Swapping is done because the last element can be removed in O(1) time.
4) Update index of last element in hash map.
getRandom()
1) Generate a random number from 0 to last index.
2) Return the array element at the randomly generated index.
search(x)
Do a lookup for x in hash map.

This is way old, but since there's no C++ answer, here are my two cents.
#include <iostream>
#include <vector>
#include <unordered_map>
#include <cstdlib>
template <typename T> class bucket{
int size;
std::vector<T> v;
std::unordered_map<T, int> m;
public:
bucket() : size(0) {}
void insert(const T& item){
//ignore duplicate insertions
if(m.find(item) != m.end()){
return;
}
v.push_back(item);
m.emplace(item, size);
size++;
}
void remove(const T& item){
//does nothing if the item is not present in the list
auto it = m.find(item);
if(it == m.end()){
return;
}
int idx = it->second;
//move the last element into the freed slot and update its index
m[v.back()] = idx;
v[idx] = v.back();
v.pop_back();
m.erase(item);
size--;
}
T& getRandom(){
int idx = rand()%size;
return v[idx];
}
bool lookup(const T& item){
return m.find(item) != m.end();
}
//method to check that remove has worked
void print(){
for(auto it = v.begin(); it != v.end(); it++){
std::cout<<*it<<" ";
}
}
};
Here's a piece of client code to test the solution.
int main() {
bucket<char> b;
b.insert('d');
b.insert('k');
b.insert('l');
b.insert('h');
b.insert('j');
b.insert('z');
b.insert('p');
std::cout<<b.getRandom()<<std::endl;
b.print();
std::cout<<std::endl;
b.remove('h');
b.print();
return 0;
}

In C# 3.0 + .NET Framework 4, a generic Dictionary<TKey,TValue> is handier than a Hashtable because you can use the System.Linq extension method ElementAt() to pick the n-th stored KeyValuePair<TKey,TValue>. Note, though, that since Dictionary does not implement IList<T>, ElementAt() has to enumerate up to that position, so getting a random element this way is O(n), not O(1):
using System.Linq;
Random _generator = new Random((int)DateTime.Now.Ticks);
Dictionary<string,object> _elements = new Dictionary<string,object>();
....
public object GetRandom()
{
return _elements.ElementAt(_generator.Next(_elements.Count)).Value;
}
However, as far as I know, a Hashtable (or its Dictionary progeny) is not a real solution to this problem because Put() can only be amortized O(1) , not true O(1) , because it is O(N) at the dynamic resize boundary.
Is there a real solution to this problem ? All I can think of is if you specify a Dictionary/Hashtable initial capacity an order of magnitude beyond what you anticipate ever needing, then you get O(1) operations because you never need to resize.

I agree with Anon. Except for the last requirement, where getting a random element with equal fairness is required, all the other requirements can be addressed using a single hash-based data structure. I would choose HashSet for this in Java. The modulo of an element's hash code gives the index into the underlying array in O(1) time, which covers the add, remove and contains operations.

Can't we do this using Java's HashSet? It provides insert, delete and search, all in O(1) by default.
For getRandom we can make use of the Set's iterator, which gives somewhat arbitrary ordering anyway. We can just return the first element from the iterator without worrying about the rest of the elements:
public Integer getRandom() {
Iterator<Integer> sitr = s.iterator();
return sitr.next();
}

/* Java program to design a data structure that supports the following
operations in Theta(1) time:
a) Insert
b) Delete
c) Search
d) getRandom */
import java.util.*;
// class to represent the required data structure
class MyDS
{
ArrayList<Integer> arr; // A resizable array
// A hash where keys are array elements and values are
// indexes in arr[]
HashMap<Integer, Integer> hash;
// Constructor (creates arr[] and hash)
public MyDS()
{
arr = new ArrayList<Integer>();
hash = new HashMap<Integer, Integer>();
}
// A Theta(1) function to add an element to MyDS
// data structure
void add(int x)
{
// If element is already present, then nothing to do
if (hash.get(x) != null)
return;
// Else put element at the end of arr[]
int s = arr.size();
arr.add(x);
// And put in hash also
hash.put(x, s);
}
// A Theta(1) function to remove an element from MyDS
// data structure
void remove(int x)
{
// Check if element is present
Integer index = hash.get(x);
if (index == null)
return;
// If present, then remove element from hash
hash.remove(x);
// Swap element with last element so that remove from
// arr[] can be done in O(1) time
int size = arr.size();
Integer last = arr.get(size-1);
Collections.swap(arr, index, size-1);
// Remove last element (This is O(1))
arr.remove(size-1);
// Update hash table for new index of last element,
// unless the removed element was itself the last one
if (!last.equals(x))
hash.put(last, index);
}
// Returns a random element from MyDS
int getRandom()
{
// Find a random index from 0 to size - 1
Random rand = new Random(); // (in real code, reuse one Random instance)
int index = rand.nextInt(arr.size());
// Return element at randomly picked index
return arr.get(index);
}
// Returns index of element if element is present, otherwise null
Integer search(int x)
{
return hash.get(x);
}
}
// Driver class
class Main
{
public static void main (String[] args)
{
MyDS ds = new MyDS();
ds.add(10);
ds.add(20);
ds.add(30);
ds.add(40);
System.out.println(ds.search(30));
ds.remove(20);
ds.add(50);
System.out.println(ds.search(50));
System.out.println(ds.getRandom());
}
}

This solution properly handles duplicate values. You can:
Insert the same element multiple times
Remove a single instances of an element
To make this possible, we just need to keep a hash-set of indexes for each element.
import random

class RandomCollection:
    def __init__(self):
        self.map = {}
        self.list = []

    def get_random_element(self):
        return random.choice(self.list)

    def insert(self, element):
        index = len(self.list)
        self.list.append(element)
        if element not in self.map:
            self.map[element] = set()
        self.map[element].add(index)

    def remove(self, element):
        if element not in self.map:
            raise Exception("Element not found", element)
        # pop any index in constant time
        index = self.map[element].pop()
        # find last element
        last_index = len(self.list) - 1
        last_element = self.list[last_index]
        # keep map updated; this also works when removing
        # the last element because add() does nothing
        self.map[last_element].add(index)
        self.map[last_element].remove(last_index)
        if len(self.map[element]) == 0:
            del self.map[element]
        # copy last element to index and delete last element
        self.list[index] = self.list[last_index]
        del self.list[last_index]

# Example usage:
c = RandomCollection()
times = 1_000_000
for i in range(times):
    c.insert("a")
    c.insert("b")
for i in range(times - 1):
    c.remove("a")
for i in range(times):
    c.remove("b")
print(c.list)  # prints ['a']

Why don't we use epoch % arraysize to find a random element? Getting the array size is O(1), so the whole lookup is O(1).
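The suggestion above, sketched (using the current time as a cheap index source; note this is not uniformly random, merely arbitrary):

```python
import time

def epoch_element(arr):
    # len() is O(1) for a dynamic array, so the whole lookup is O(1);
    # successive calls within the same nanosecond return the same element
    return arr[time.time_ns() % len(arr)]
```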

I think we can use a doubly linked list with a hash table. The key will be the element and its associated value will be the node in the doubly linked list.
insert(H,E): insert a node into the doubly linked list and set H[E]=node; O(1)
delete(H,E): get the node's address from H[E], unlink it from its neighbours, and remove E from H; O(1)
contains(H,E) is obviously O(1); getRandom(H) is the tricky one, since a linked list offers no O(1) access to a uniformly random node.

Related

Last remaining number

I was asked this question in an interview.
Given an array 'arr' of positive integers and a starting index 'k' of the array: delete the element at k, then jump arr[k] steps forward in the array in circular fashion. Repeat this until only one element remains, and find that last remaining element.
I thought of an O(n log n) solution using an ordered map. Is an O(n) solution possible?
My guess is that there is not an O(n) solution to this problem based on the fact that it seems to involve doing something that is impossible. The obvious thing you would need to solve this problem in linear time is a data structure like an array that exposes two operations on an ordered collection of values:
O(1) order-preserving deletes from the data structure.
O(1) lookups of the nth undeleted item in the data structure.
However, such a data structure has been formally proven to not exist; see "Optimal Algorithms for List Indexing and Subset Rank" and its citations. It is not a proof to say that if the natural way to solve some problem involves using a data structure that is impossible, the problem itself is probably impossible, but such an intuition is often correct.
Anyway there are lots of ways to do this in O(n log n). Below is an implementation of maintaining a tree of undeleted ranges in the array. GetIndex() below returns an index into the original array given a zero-based index into the array if items had been deleted from it. Such a tree is not self-balancing so will have O(n) operations in the worst case but in the average case Delete and GetIndex will be O(log n).
namespace CircleGame
{
class Program
{
class ArrayDeletes
{
private class UndeletedRange
{
private int _size;
private int _index;
private UndeletedRange _left;
private UndeletedRange _right;
public UndeletedRange(int i, int sz)
{
_index = i;
_size = sz;
}
public bool IsLeaf()
{
return _left == null && _right == null;
}
public int Size()
{
return _size;
}
public void Delete(int i)
{
if (i >= _size)
throw new IndexOutOfRangeException();
if (! IsLeaf())
{
int left_range = _left._size;
if (i < left_range)
_left.Delete(i);
else
_right.Delete(i - left_range);
_size--;
return;
}
if (i == _size - 1)
{
_size--; // Can delete the last item in a range by decrementing its size
return;
}
if (i == 0) // Can delete the first item in a range by incrementing the index
{
_index++;
_size--;
return;
}
_left = new UndeletedRange(_index, i);
int right_index = i + 1;
_right = new UndeletedRange(_index + right_index, _size - right_index);
_size--;
_index = -1; // the index field of a non-leaf is no longer necessarily valid.
}
public int GetIndex(int i)
{
if (i >= _size)
throw new IndexOutOfRangeException();
if (IsLeaf())
return _index + i;
int left_range = _left._size;
if (i < left_range)
return _left.GetIndex(i);
else
return _right.GetIndex(i - left_range);
}
}
private UndeletedRange _root;
public ArrayDeletes(int n)
{
_root = new UndeletedRange(0, n);
}
public void Delete(int i)
{
_root.Delete(i);
}
public int GetIndex(int indexRelativeToDeletes )
{
return _root.GetIndex(indexRelativeToDeletes);
}
public int Size()
{
return _root.Size();
}
}
static int CircleGame( int[] array, int k )
{
var ary_deletes = new ArrayDeletes(array.Length);
while (ary_deletes.Size() > 1)
{
int next_step = array[ary_deletes.GetIndex(k)];
ary_deletes.Delete(k);
k = (k + next_step - 1) % ary_deletes.Size();
}
return array[ary_deletes.GetIndex(0)];
}
static void Main(string[] args)
{
var array = new int[] { 5,4,3,2,1 };
int last_remaining = CircleGame(array, 2); // third element, this call is zero-based...
}
}
}
Also note that if the values in the array are known to be bounded such that they are always less than some m less than n, there are lots of O(nm) algorithms -- for example, just using a circular linked list.
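The circular-linked-list variant mentioned above can be sketched as follows (a Python sketch; the k-update convention mirrors the C# code above: after deleting, advance arr[k]-1 positions past the deleted node's successor, so each move costs at most m steps and the whole game is O(n·m)):

```python
def circle_game(arr, k):
    """Simulate the deletion game with circular next-pointers."""
    n = len(arr)
    nxt = [(i + 1) % n for i in range(n)]   # circular successor pointers
    prev, cur, size = (k - 1) % n, k, n
    while size > 1:
        step = arr[cur]
        nxt[prev] = nxt[cur]                # unlink the current node
        size -= 1
        cur = nxt[prev]                     # successor of the deleted node
        for _ in range((step - 1) % size):  # advance the remaining steps
            prev = cur
            cur = nxt[cur]
    return arr[cur]
```

On the example from Main above, circle_game([5, 4, 3, 2, 1], 2) deletes indices 2, 0, 3, 1 in turn and returns the value at index 4.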
I couldn't think of an O(n) solution. However, we could have O(n log n) average time by using a treap or an augmented BST with a value in each node for the size of its subtree. The treap enables us to find and remove the kth entry in O(log n) average time.
For example, A = [1, 2, 3, 4] and k = 3 (as Sumit reminded me in the comments, use the array indexes as values in the tree since those are ordered):
      2(0.9)
     /      \
 1(0.81)   4(0.82)
           /
       3(0.76)
Find and remove 3rd element. Start at 2 with size = 2 (including the left subtree). Go right. Left subtree is size 1, which together makes 3, so we found the 3rd element. Remove:
      2(0.9)
     /      \
 1(0.81)   4(0.82)
Now we're starting on the third element in an array with n - 1 = 3 elements and looking for the 3rd element from there. We'll use zero-indexing to correlate with our modular arithmetic, so the third element in modulus 3 would be 2 and 2 + 3 = 5 mod 3 = 2, the second element. We find it immediately since the root with its left subtree is size 2. Remove:
 4(0.82)
    /
1(0.81)
Now we're starting on the second element in modulus 2, so 1, and we're adding 2. 3 mod 2 is 1. Removing the first element we are left with 4 as the last element.
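As an alternative to a treap, the same "find and remove the k-th remaining element in O(log n)" primitive can be built from a Fenwick tree over alive/dead flags (a sketch; this gives a guaranteed, not just average, O(n log n) total):

```python
class Fenwick:
    """1-based Fenwick tree storing 0/1 alive flags."""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)
        for i in range(1, n + 1):
            self.update(i, 1)      # everything starts alive

    def update(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def kth(self, k):
        """1-based position of the k-th alive element (k >= 1),
        found by descending the implicit binary structure."""
        pos = 0
        bit = 1
        while bit * 2 <= self.n:
            bit *= 2
        while bit:
            nxt = pos + bit
            if nxt <= self.n and self.tree[nxt] < k:
                pos = nxt
                k -= self.tree[nxt]
            bit //= 2
        return pos + 1
```

Deleting the k-th survivor is update(kth(k), -1); both halves are O(log n).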

How to validate if a B-tree is sorted

I just had this as an interview question and was wondering if anyone knows the answer?
Write a method that validates whether a B-tree is correctly sorted. You do NOT need to validate whether
the tree is balanced. Use the following model for a node in the B-tree.
It was to be done in Java and use this model:
class Node {
List<Integer> keys;
List<Node> children;
}
One (space-inefficient but simple) way to do this is to do a generalized inorder traversal of the B-tree to get back the keys in what should be sorted order, then to check whether that sequence actually is in sorted order. Here's some quick code for this:
public static boolean isSorted(Node root) {
ArrayList<Integer> values = new ArrayList<Integer>();
performInorderTraversal(root, values);
return isArraySorted(values);
}
private static void performInorderTraversal(Node root, ArrayList<Integer> result) {
/* An empty tree has no values. */
if (root == null) return;
/* A leaf has no children; just emit its keys in order. */
if (root.children == null || root.children.isEmpty()) {
result.addAll(root.keys);
return;
}
/* Process the first subtree here, then loop, processing the interleaved
* keys and subtrees.
*/
performInorderTraversal(root.children.get(0), result);
for (int i = 1; i < root.children.size(); i++) {
result.add(root.keys.get(i - 1));
performInorderTraversal(root.children.get(i), result);
}
}
private static boolean isArraySorted(ArrayList<Integer> array) {
for (int i = 0; i < array.size() - 1; i++) {
if (array.get(i) >= array.get(i + 1)) return false;
}
return true;
}
This takes time O(n) and uses space O(n), where n is the number of elements in the B-tree. You can cut the space usage down to O(h), where h is the height of the B-tree, by not storing all the elements in the traversal and instead just tracking the very last one, stopping the search early if the next-encountered value is not larger than the previous one. I didn't do that here because it takes more code, but conceptually it's not too hard.
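The O(h)-space variant described above — tracking only the last key seen instead of collecting them all — might look like this (a Python sketch; the `keys`/`children` fields mirror the Java model):

```python
class Node:
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []

def is_sorted(root, last=None):
    """Generalized inorder walk keeping only the previously seen key.
    Returns (ok, last_key_seen); ok is False on the first violation."""
    if root is None:
        return True, last
    if not root.children:                  # leaf: check its keys directly
        for k in root.keys:
            if last is not None and k <= last:
                return False, last
            last = k
        return True, last
    for i, child in enumerate(root.children):
        ok, last = is_sorted(child, last)
        if not ok:
            return False, last
        if i < len(root.keys):             # key between child i and i+1
            k = root.keys[i]
            if last is not None and k <= last:
                return False, last
            last = k
    return True, last
```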
Hope this helps!

Find the minimum length snippet from the paragraph that contains all the words belonging to a given query in O(n) time

Given a review paragraph and keywords, find the minimum-length snippet from the paragraph which contains all keywords in any order. If there are millions of reviews, what preprocessing step would you do?
The first part is simple — just the minimum window problem. For preprocessing, I use an inverted index: for each review I build a table storing the list of occurrences of each word. When a query comes, I retrieve the list of indices for each word. Now, is there some way to find the minimum window length from this set of lists in O(n) time? I tried building a min-heap and a max-heap to store the current index of each list, keeping track of the minimum window length using the roots of both heaps. I then perform extractMin and remove the same element from the max-heap as well; to know the location of each element in the max-heap (for removal), I maintain a hash table. Then, from the list the extracted element belonged to, I insert the next element into both heaps and update the window length if needed. This takes O(n log n) time. Is it possible to do this in O(n) time?
Assuming this combination is sorted here is how I would do it:
Create a list of objects that describe the word and its index, Something like Obj(String name,Int index).
Init a set containing all keywords of the query.
Init the lower bound of the window as the index of the first element in the list.
Go through the list updating the upper bound of the window as the current object's index, updating the lower bound of the window as the index of the first occurrence of any of the words in your query (i.e. once min_window is set to the index of an actual word occurrence it is no longer updated) and by removing the corresponding word from the set of keywords.
When the set is empty, save the resulting lower and upper bound along with the length of the snippet.
Repeat the steps 2 to 5 but this time the list you're going to use is the list that starts at the element that comes right after the one defined by the previous min_window and by only keeping the min_window and max_window if the length of the snippet is shorter than the previous one (this should be repeated until you can no longer find all occurrences in the given sublist).
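The steps above amount to the classic minimum-window scan; a compact two-pointer version over the word list (illustrative; returns the bounds of the best window):

```python
from collections import Counter

def min_snippet(words, keywords):
    """Return (lo, hi) index bounds of the shortest window of `words`
    containing every keyword, or None if no such window exists."""
    need = Counter(keywords)
    missing = len(need)        # distinct keywords still unsatisfied
    counts = Counter()
    best = None
    lo = 0
    for hi, w in enumerate(words):
        if w in need:
            counts[w] += 1
            if counts[w] == need[w]:
                missing -= 1
        while missing == 0:    # shrink from the left while still valid
            if best is None or hi - lo < best[1] - best[0]:
                best = (lo, hi)
            v = words[lo]
            if v in need:
                counts[v] -= 1
                if counts[v] < need[v]:
                    missing += 1
            lo += 1
    return best
```

Each word enters and leaves the window at most once, so the scan is O(n).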
#include<bits/stdc++.h>
using namespace std;
map<string,int>word;
void functionlower(string& str){
transform(str.begin(),str.end(),str.begin(),::tolower);
}
string compareWord(string& str){
string temp;
temp.resize(str.size());
transform(str.begin(),str.end(),temp.begin(),::tolower);
return temp;
}
int main(){
int total_word;
cin>>total_word;
for(int i=0;i<total_word;i++){
string str;
cin>>str;
functionlower(str);
word.insert({str,0});
}
cin.ignore();
string str;
vector<string>para;
getline(cin,str);
int index=0;
for(int i=0;i<=str.size();i++){
if(i==str.size()||str[i]==' '){para.push_back(str.substr(index,i-index)); index=i+1;}
}
int currlen=0;
int currpos=0;
int lenprint=0;
int olen=-1;
int opos=-1;
for(int i=0;i<para.size();i++){
string search=compareWord(para[i]);
if(word.find(search)!=word.end()){
if(word[search]==0)currlen++;
word[search]++;
}
while(currlen>=word.size()){
search=compareWord(para[currpos]);
if((i-currpos)<olen||olen==-1){
olen=i-currpos;
opos=currpos;
}
if(word.find(search)!=word.end()){
if(word[search]==1)break;
word[search]--;
currpos++;
lenprint=i;
}else currpos++;
}
}
for(int i=0;i<=olen;i++){
cout<<para[opos+i]<<" ";
}
cout<<endl;
return 0;
}
O(n log k), where k is the number of words to search for.
Assuming a constant word length, this can be achieved in O(n) time, where n is the number of words in the paragraph; here is an implementation in Java:
package Basic.MinSnippetWithAllKeywords;
import java.util.*;
/**
* Given a review paragraph and keywords,
* find minimum length snippet from paragraph which contains all keywords in any order.
*/
public class Solution {
public String minSnippet(String para, Set<String> keywords) {
LinkedList<Integer> deque = new LinkedList<>();
String[] words = para.split("\\s");
for (int i = 0; i < words.length; ++i) {
if(keywords.contains(words[i]))
deque.offer(i);
}
while(deque.size() > 1) {
int first = deque.pollFirst();
int second = deque.peekFirst();
if (!words[first].equals(words[second])) {
deque.offerFirst(first);
break;
}
}
while(deque.size() > 1) {
int first = deque.pollLast();
int second = deque.peekLast();
if (!words[first].equals(words[second])) {
deque.offerLast(first);
break;
}
}
if (deque.isEmpty())
return "";
return String.join(" ",
Arrays.copyOfRange(words, deque.peekFirst(), deque.peekLast() + 1));
}
/*
Example:
my name is shubham mishra
is name
*/
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
String para = sc.nextLine();
String keyLine = sc.nextLine();
Set<String> keywords = new HashSet<>();
keywords.addAll(Arrays.asList(keyLine.split("\\s")));
System.out.println(new Solution().minSnippet(para, keywords));
}
}

Delete duplicate integer from an integer array - pseudo code

I need some help in getting this right,
problem
Write a function which takes 2 arrays- One array is the source array and the other array is the array of indices and delete all those elements present at the indices of the source array taking the indices from the second array.
This is what I have come up with....
public static int[] DeleteArrayUsingIndices(int[] source, int[] indices)
{
for (int i = 0; i < indices.Length; i++)
{
if (indices[i] < source.Length)
{
source[indices[i]] = int.MinValue; // delete
}
}
return source;
}
I am not very sure with this solution, as this does not remove the value. Can anyone help me out with this.
You cannot really delete elements from an array, so you need to ask what is meant by this wording. If replacing the elements with an exceptional element (like int.MinValue in your code) is acceptable, your solution is fine.
Another interpretation could be to rearrange the array so the "not deleted" elements are at the beginning of the array, in the same order they were in the original — in this case you would want to return the new "length" of the array (the number of elements that were not "deleted"). This means a "delete" operation compacts the not-yet-deleted elements toward the beginning of the array, shifting the contents from the deleted index toward the end. Care must be taken not to "delete" the same element twice.
To achieve the latter, you will either have to keep track of how many positions each element has been shifted, or update the index array by decrementing indices larger than the current one (to accommodate the now-compacted array) — in this case you could start by sorting the index array (possibly removing duplicates at the same time) and just keep track of how many positions have been shifted so far.
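The "compact in place and return the new length" interpretation can be sketched as (illustrative):

```python
def delete_indices(source, indices):
    """Compact `source` in place so elements at the given positions are
    dropped; returns the new logical length. Duplicate or out-of-range
    indices are harmless because each position is tested only once."""
    dead = set(indices)
    write = 0
    for read, value in enumerate(source):
        if read not in dead:
            source[write] = value
            write += 1
    return write
```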
try this
public static void main(String[] args) {
Integer[] a = {1,2,3,4,5,6,7,8,9};
Integer[] b = {2,3};
System.out.println(Arrays.asList(deleteArrayUsingIndices(a, b)));
}
^ for testing
public static Integer[] deleteArrayUsingIndices(Integer[] source, Integer[] indices)
{
ArrayList<Integer> sourceArr = new ArrayList<Integer>(Arrays.asList(source));
// Collect the indices (deduplicated) in descending order, so removing
// one position does not shift the positions still to be removed.
// (Removing by value instead would delete ALL occurrences of a
// duplicated value, not just the one at the given index.)
TreeSet<Integer> toDelete = new TreeSet<Integer>(Collections.reverseOrder());
toDelete.addAll(Arrays.asList(indices));
for (int i : toDelete)
{
if (i >= 0 && i < sourceArr.size())
sourceArr.remove(i);
}
return sourceArr.toArray(new Integer[sourceArr.size()]);
}

Ordered insertion working sporadically with primitive types & strings

For an assignment, we've been asked to implement both ordered and unordered versions of LinkedLists as Bags in Java. The ordered versions will simply extend the unordered implementations while overriding the insertion methods.
The ordering on insertion function works... somewhat. Given a test array of
String[] testArray= {"z","g","x","v","y","t","s","r","w","q"};
the output is
q w r s t y v x g z
when it should be
g q r s t v w x y z
However, the ordering works fine when the elements aren't mixed up in value. For example, I originally used the testArray[] above with the alphabet reversed, and the ordering was exactly as it should be.
My add function is
@Override
public void add(E e){
Iter iter= new Iter(head.prev);
int compValue;
E currentItem= null;
//empty list, add at first position
if (size < 1)
iter.add(e);
else {
while (iter.hasNext()){
currentItem= iter.next(); //gets next item
//saves on multiple compareTo calls
compValue= e.compareTo(currentItem);
//adds at given location
if (compValue <= 0)
iter.add(e, iter.index);
else //moves on
currentItem= iter.next();
}
}
}
The iterator functionality is implemented as
//decided to use iterator to simplify method functionality
protected class Iter implements Iterator<E>, ListIterator<E>{
protected int index= 0;
protected Node current= null;
//Sets a new iterator to the index point provided
public Iter(int index){
current= head.next;
this.index=0;
while (index > nextIndex()) //moves on to the index point
next();
}
public void add(E e, int index){
size++;
Iter iterator= new Iter(index);
Node node= new Node();
Node current= iterator.current.prev;
node.next= current.next;
node.prev= current;
node.next.prev= node;
node.prev.next= node;
node.item= e;
}
As it is right now, the only things being used are primitive types. I know for objects, a specific comparable class will have to be written, but in this case, String contains a compareTo() method that should give correct ordering.
By chance, a classmate of mine has a similar implementation and is returning the same results.
Using natural ordering, how can I resolve this problem?
Three things about your add() function jump out at me:
It should exit the loop as soon as it inserts the new value; this might not actually be a problem, but it is inefficient to keep looking
You call iter.next() at the top of the loop, but call it AGAIN if the value isn't added
If your list has just 1 value in it, and you try to add a value larger than the one currently in the list, won't the new value fail to be added?
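For reference, the insertion logic with those three issues addressed looks like this in outline (Python stands in for the list internals; `items` is a plain sorted list, so this is a sketch of the control flow, not of the assignment's iterator machinery):

```python
def sorted_insert(items, e):
    """Insert e into the already-sorted list `items`, keeping order."""
    for i, current in enumerate(items):
        if e <= current:        # compare exactly once per element
            items.insert(i, e)
            return              # exit as soon as we have inserted
    items.append(e)             # e is larger than everything present
```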
